Test Report: KVM_Linux_crio 19319

                    
b956d22c0e4b666a5d5401b6edb64a8355930c4b:2024-07-23:35468

Failed tests (29/328)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 153.87
41 TestAddons/parallel/MetricsServer 362.5
54 TestAddons/StoppedEnableDisable 154.22
173 TestMultiControlPlane/serial/StopSecondaryNode 141.81
175 TestMultiControlPlane/serial/RestartSecondaryNode 55.29
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 403.59
180 TestMultiControlPlane/serial/StopCluster 141.78
240 TestMultiNode/serial/RestartKeepsNodes 322.3
242 TestMultiNode/serial/StopMultiNode 141.17
249 TestPreload 331.66
257 TestKubernetesUpgrade 355.46
299 TestStartStop/group/old-k8s-version/serial/FirstStart 291.34
307 TestStartStop/group/no-preload/serial/Stop 138.93
310 TestStartStop/group/embed-certs/serial/Stop 139.15
311 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 111.07
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.96
324 TestStartStop/group/old-k8s-version/serial/SecondStart 749.77
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 545.49
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 545.65
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.12
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 395.01
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.37
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 355.23
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 103.61
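Any of these failures can be re-run in isolation with the standard Go test selector. A minimal sketch, assuming a checked-out minikube tree with out/minikube-linux-amd64 already built; the integration suite may also expect extra flags (driver, container runtime) matching this run:

	# sketch: re-run a single failed test from the minikube repo root
	go test -v -timeout 60m -run 'TestAddons/parallel/Ingress' ./test/integration
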
TestAddons/parallel/Ingress (153.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-566823 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-566823 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-566823 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df881e74-ce15-47aa-8763-8ee63ffc74ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df881e74-ce15-47aa-8763-8ee63ffc74ae] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00325439s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-566823 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.2468676s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-566823 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.114
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 addons disable ingress-dns --alsologtostderr -v=1: (1.100379345s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 addons disable ingress --alsologtostderr -v=1: (7.667644853s)
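The only step that failed above is the in-VM curl against the ingress controller; the reported status 28 matches curl's "operation timed out" exit code, so no response ever came back from 127.0.0.1:80 inside the VM. A minimal sketch for repeating that check by hand, assuming the addons-566823 profile is still running and the ingress addon has been re-enabled:

	# sketch: inspect the controller, then repeat the exact request the test issues
	kubectl --context addons-566823 -n ingress-nginx get pods -o wide
	kubectl --context addons-566823 get ingress -A
	out/minikube-linux-amd64 -p addons-566823 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
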
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-566823 -n addons-566823
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 logs -n 25: (1.184764764s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-788360                                                                     | download-only-788360 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-344682                                                                     | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-055184                                                                     | download-only-055184 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-788360                                                                     | download-only-788360 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-132421 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | binary-mirror-132421                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32931                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-132421                                                                     | binary-mirror-132421 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-566823 --wait=true                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:59 UTC | 23 Jul 24 13:59 UTC |
	|         | -p addons-566823                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-566823 ip                                                                            | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | -p addons-566823                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-566823 ssh cat                                                                       | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | /opt/local-path-provisioner/pvc-c8cbfc9c-f3f6-4373-91f9-dcf10e6a4265_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-566823 ssh curl -s                                                                   | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-566823 addons                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:01 UTC | 23 Jul 24 14:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-566823 addons                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:01 UTC | 23 Jul 24 14:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-566823 ip                                                                            | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 13:57:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 13:57:26.258787   19502 out.go:291] Setting OutFile to fd 1 ...
	I0723 13:57:26.259024   19502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:26.259032   19502 out.go:304] Setting ErrFile to fd 2...
	I0723 13:57:26.259036   19502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:26.259194   19502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 13:57:26.259737   19502 out.go:298] Setting JSON to false
	I0723 13:57:26.260524   19502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2392,"bootTime":1721740654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 13:57:26.260579   19502 start.go:139] virtualization: kvm guest
	I0723 13:57:26.262666   19502 out.go:177] * [addons-566823] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 13:57:26.263904   19502 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 13:57:26.263958   19502 notify.go:220] Checking for updates...
	I0723 13:57:26.266370   19502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 13:57:26.267711   19502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:57:26.268942   19502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:26.270070   19502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 13:57:26.271292   19502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 13:57:26.272503   19502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 13:57:26.303755   19502 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 13:57:26.304876   19502 start.go:297] selected driver: kvm2
	I0723 13:57:26.304897   19502 start.go:901] validating driver "kvm2" against <nil>
	I0723 13:57:26.304922   19502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 13:57:26.305633   19502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:26.305722   19502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 13:57:26.319951   19502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 13:57:26.319997   19502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 13:57:26.320229   19502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 13:57:26.320293   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:57:26.320320   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:57:26.320328   19502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 13:57:26.320406   19502 start.go:340] cluster config:
	{Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:57:26.320547   19502 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:26.322258   19502 out.go:177] * Starting "addons-566823" primary control-plane node in "addons-566823" cluster
	I0723 13:57:26.323420   19502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:57:26.323450   19502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 13:57:26.323459   19502 cache.go:56] Caching tarball of preloaded images
	I0723 13:57:26.323536   19502 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 13:57:26.323548   19502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 13:57:26.323866   19502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json ...
	I0723 13:57:26.323889   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json: {Name:mk9521b81ec09d3952c01470afbc69b6bbfc2443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:26.324033   19502 start.go:360] acquireMachinesLock for addons-566823: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 13:57:26.324091   19502 start.go:364] duration metric: took 41.807µs to acquireMachinesLock for "addons-566823"
	I0723 13:57:26.324111   19502 start.go:93] Provisioning new machine with config: &{Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 13:57:26.324189   19502 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 13:57:26.326081   19502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0723 13:57:26.326239   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:57:26.326284   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:57:26.340398   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0723 13:57:26.340784   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:57:26.341245   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:57:26.341262   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:57:26.341593   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:57:26.341747   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:26.341865   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:26.341980   19502 start.go:159] libmachine.API.Create for "addons-566823" (driver="kvm2")
	I0723 13:57:26.342009   19502 client.go:168] LocalClient.Create starting
	I0723 13:57:26.342050   19502 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 13:57:26.627266   19502 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 13:57:26.784601   19502 main.go:141] libmachine: Running pre-create checks...
	I0723 13:57:26.784625   19502 main.go:141] libmachine: (addons-566823) Calling .PreCreateCheck
	I0723 13:57:26.785101   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:26.785541   19502 main.go:141] libmachine: Creating machine...
	I0723 13:57:26.785556   19502 main.go:141] libmachine: (addons-566823) Calling .Create
	I0723 13:57:26.785716   19502 main.go:141] libmachine: (addons-566823) Creating KVM machine...
	I0723 13:57:26.787096   19502 main.go:141] libmachine: (addons-566823) DBG | found existing default KVM network
	I0723 13:57:26.787808   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:26.787639   19524 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0723 13:57:26.787841   19502 main.go:141] libmachine: (addons-566823) DBG | created network xml: 
	I0723 13:57:26.787857   19502 main.go:141] libmachine: (addons-566823) DBG | <network>
	I0723 13:57:26.787869   19502 main.go:141] libmachine: (addons-566823) DBG |   <name>mk-addons-566823</name>
	I0723 13:57:26.787880   19502 main.go:141] libmachine: (addons-566823) DBG |   <dns enable='no'/>
	I0723 13:57:26.787891   19502 main.go:141] libmachine: (addons-566823) DBG |   
	I0723 13:57:26.787904   19502 main.go:141] libmachine: (addons-566823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0723 13:57:26.787921   19502 main.go:141] libmachine: (addons-566823) DBG |     <dhcp>
	I0723 13:57:26.787932   19502 main.go:141] libmachine: (addons-566823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0723 13:57:26.787940   19502 main.go:141] libmachine: (addons-566823) DBG |     </dhcp>
	I0723 13:57:26.787949   19502 main.go:141] libmachine: (addons-566823) DBG |   </ip>
	I0723 13:57:26.787955   19502 main.go:141] libmachine: (addons-566823) DBG |   
	I0723 13:57:26.787963   19502 main.go:141] libmachine: (addons-566823) DBG | </network>
	I0723 13:57:26.787969   19502 main.go:141] libmachine: (addons-566823) DBG | 
	I0723 13:57:26.792991   19502 main.go:141] libmachine: (addons-566823) DBG | trying to create private KVM network mk-addons-566823 192.168.39.0/24...
	I0723 13:57:26.858781   19502 main.go:141] libmachine: (addons-566823) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 ...
	I0723 13:57:26.858824   19502 main.go:141] libmachine: (addons-566823) DBG | private KVM network mk-addons-566823 192.168.39.0/24 created
	I0723 13:57:26.858840   19502 main.go:141] libmachine: (addons-566823) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 13:57:26.858857   19502 main.go:141] libmachine: (addons-566823) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 13:57:26.858868   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:26.858719   19524 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:27.110056   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.109886   19524 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa...
	I0723 13:57:27.245741   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.245626   19524 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/addons-566823.rawdisk...
	I0723 13:57:27.245763   19502 main.go:141] libmachine: (addons-566823) DBG | Writing magic tar header
	I0723 13:57:27.245775   19502 main.go:141] libmachine: (addons-566823) DBG | Writing SSH key tar header
	I0723 13:57:27.245887   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.245806   19524 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 ...
	I0723 13:57:27.245922   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823
	I0723 13:57:27.245976   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 (perms=drwx------)
	I0723 13:57:27.245996   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 13:57:27.246008   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 13:57:27.246027   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:27.246040   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 13:57:27.246049   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 13:57:27.246061   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 13:57:27.246071   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins
	I0723 13:57:27.246082   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 13:57:27.246094   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home
	I0723 13:57:27.246108   19502 main.go:141] libmachine: (addons-566823) DBG | Skipping /home - not owner
	I0723 13:57:27.246121   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 13:57:27.246132   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 13:57:27.246149   19502 main.go:141] libmachine: (addons-566823) Creating domain...
	I0723 13:57:27.247150   19502 main.go:141] libmachine: (addons-566823) define libvirt domain using xml: 
	I0723 13:57:27.247173   19502 main.go:141] libmachine: (addons-566823) <domain type='kvm'>
	I0723 13:57:27.247183   19502 main.go:141] libmachine: (addons-566823)   <name>addons-566823</name>
	I0723 13:57:27.247190   19502 main.go:141] libmachine: (addons-566823)   <memory unit='MiB'>4000</memory>
	I0723 13:57:27.247199   19502 main.go:141] libmachine: (addons-566823)   <vcpu>2</vcpu>
	I0723 13:57:27.247211   19502 main.go:141] libmachine: (addons-566823)   <features>
	I0723 13:57:27.247223   19502 main.go:141] libmachine: (addons-566823)     <acpi/>
	I0723 13:57:27.247229   19502 main.go:141] libmachine: (addons-566823)     <apic/>
	I0723 13:57:27.247234   19502 main.go:141] libmachine: (addons-566823)     <pae/>
	I0723 13:57:27.247239   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247245   19502 main.go:141] libmachine: (addons-566823)   </features>
	I0723 13:57:27.247254   19502 main.go:141] libmachine: (addons-566823)   <cpu mode='host-passthrough'>
	I0723 13:57:27.247258   19502 main.go:141] libmachine: (addons-566823)   
	I0723 13:57:27.247264   19502 main.go:141] libmachine: (addons-566823)   </cpu>
	I0723 13:57:27.247269   19502 main.go:141] libmachine: (addons-566823)   <os>
	I0723 13:57:27.247277   19502 main.go:141] libmachine: (addons-566823)     <type>hvm</type>
	I0723 13:57:27.247286   19502 main.go:141] libmachine: (addons-566823)     <boot dev='cdrom'/>
	I0723 13:57:27.247296   19502 main.go:141] libmachine: (addons-566823)     <boot dev='hd'/>
	I0723 13:57:27.247318   19502 main.go:141] libmachine: (addons-566823)     <bootmenu enable='no'/>
	I0723 13:57:27.247323   19502 main.go:141] libmachine: (addons-566823)   </os>
	I0723 13:57:27.247331   19502 main.go:141] libmachine: (addons-566823)   <devices>
	I0723 13:57:27.247346   19502 main.go:141] libmachine: (addons-566823)     <disk type='file' device='cdrom'>
	I0723 13:57:27.247361   19502 main.go:141] libmachine: (addons-566823)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/boot2docker.iso'/>
	I0723 13:57:27.247372   19502 main.go:141] libmachine: (addons-566823)       <target dev='hdc' bus='scsi'/>
	I0723 13:57:27.247380   19502 main.go:141] libmachine: (addons-566823)       <readonly/>
	I0723 13:57:27.247387   19502 main.go:141] libmachine: (addons-566823)     </disk>
	I0723 13:57:27.247394   19502 main.go:141] libmachine: (addons-566823)     <disk type='file' device='disk'>
	I0723 13:57:27.247402   19502 main.go:141] libmachine: (addons-566823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 13:57:27.247428   19502 main.go:141] libmachine: (addons-566823)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/addons-566823.rawdisk'/>
	I0723 13:57:27.247450   19502 main.go:141] libmachine: (addons-566823)       <target dev='hda' bus='virtio'/>
	I0723 13:57:27.247460   19502 main.go:141] libmachine: (addons-566823)     </disk>
	I0723 13:57:27.247468   19502 main.go:141] libmachine: (addons-566823)     <interface type='network'>
	I0723 13:57:27.247475   19502 main.go:141] libmachine: (addons-566823)       <source network='mk-addons-566823'/>
	I0723 13:57:27.247482   19502 main.go:141] libmachine: (addons-566823)       <model type='virtio'/>
	I0723 13:57:27.247487   19502 main.go:141] libmachine: (addons-566823)     </interface>
	I0723 13:57:27.247494   19502 main.go:141] libmachine: (addons-566823)     <interface type='network'>
	I0723 13:57:27.247500   19502 main.go:141] libmachine: (addons-566823)       <source network='default'/>
	I0723 13:57:27.247507   19502 main.go:141] libmachine: (addons-566823)       <model type='virtio'/>
	I0723 13:57:27.247512   19502 main.go:141] libmachine: (addons-566823)     </interface>
	I0723 13:57:27.247518   19502 main.go:141] libmachine: (addons-566823)     <serial type='pty'>
	I0723 13:57:27.247525   19502 main.go:141] libmachine: (addons-566823)       <target port='0'/>
	I0723 13:57:27.247540   19502 main.go:141] libmachine: (addons-566823)     </serial>
	I0723 13:57:27.247552   19502 main.go:141] libmachine: (addons-566823)     <console type='pty'>
	I0723 13:57:27.247560   19502 main.go:141] libmachine: (addons-566823)       <target type='serial' port='0'/>
	I0723 13:57:27.247565   19502 main.go:141] libmachine: (addons-566823)     </console>
	I0723 13:57:27.247572   19502 main.go:141] libmachine: (addons-566823)     <rng model='virtio'>
	I0723 13:57:27.247579   19502 main.go:141] libmachine: (addons-566823)       <backend model='random'>/dev/random</backend>
	I0723 13:57:27.247585   19502 main.go:141] libmachine: (addons-566823)     </rng>
	I0723 13:57:27.247591   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247597   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247602   19502 main.go:141] libmachine: (addons-566823)   </devices>
	I0723 13:57:27.247609   19502 main.go:141] libmachine: (addons-566823) </domain>
	I0723 13:57:27.247619   19502 main.go:141] libmachine: (addons-566823) 
	I0723 13:57:27.253594   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:7f:41:11 in network default
	I0723 13:57:27.254205   19502 main.go:141] libmachine: (addons-566823) Ensuring networks are active...
	I0723 13:57:27.254223   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:27.255123   19502 main.go:141] libmachine: (addons-566823) Ensuring network default is active
	I0723 13:57:27.255511   19502 main.go:141] libmachine: (addons-566823) Ensuring network mk-addons-566823 is active
	I0723 13:57:27.255998   19502 main.go:141] libmachine: (addons-566823) Getting domain xml...
	I0723 13:57:27.256856   19502 main.go:141] libmachine: (addons-566823) Creating domain...
	I0723 13:57:28.697829   19502 main.go:141] libmachine: (addons-566823) Waiting to get IP...
	I0723 13:57:28.698600   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:28.699020   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:28.699042   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:28.699002   19524 retry.go:31] will retry after 307.94193ms: waiting for machine to come up
	I0723 13:57:29.008603   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.008986   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.009013   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.008949   19524 retry.go:31] will retry after 384.73915ms: waiting for machine to come up
	I0723 13:57:29.396898   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.397404   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.397435   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.397335   19524 retry.go:31] will retry after 426.861857ms: waiting for machine to come up
	I0723 13:57:29.825896   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.826286   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.826327   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.826251   19524 retry.go:31] will retry after 439.359176ms: waiting for machine to come up
	I0723 13:57:30.266982   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:30.267497   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:30.267527   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:30.267432   19524 retry.go:31] will retry after 536.9439ms: waiting for machine to come up
	I0723 13:57:30.806186   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:30.806607   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:30.806635   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:30.806566   19524 retry.go:31] will retry after 615.974579ms: waiting for machine to come up
	I0723 13:57:31.423980   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:31.424516   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:31.424544   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:31.424481   19524 retry.go:31] will retry after 786.794896ms: waiting for machine to come up
	I0723 13:57:32.212282   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:32.212640   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:32.212668   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:32.212600   19524 retry.go:31] will retry after 1.0057163s: waiting for machine to come up
	I0723 13:57:33.219712   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:33.220118   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:33.220143   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:33.220076   19524 retry.go:31] will retry after 1.30408869s: waiting for machine to come up
	I0723 13:57:34.526732   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:34.527161   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:34.527182   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:34.527126   19524 retry.go:31] will retry after 2.04064909s: waiting for machine to come up
	I0723 13:57:36.569195   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:36.569672   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:36.569699   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:36.569626   19524 retry.go:31] will retry after 1.957363737s: waiting for machine to come up
	I0723 13:57:38.529699   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:38.530174   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:38.530198   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:38.530084   19524 retry.go:31] will retry after 2.759683998s: waiting for machine to come up
	I0723 13:57:41.293038   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:41.293546   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:41.293569   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:41.293474   19524 retry.go:31] will retry after 3.612061693s: waiting for machine to come up
	I0723 13:57:44.909592   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:44.910080   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:44.910103   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:44.910036   19524 retry.go:31] will retry after 5.185969246s: waiting for machine to come up
	I0723 13:57:50.100167   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.100556   19502 main.go:141] libmachine: (addons-566823) Found IP for machine: 192.168.39.114
	I0723 13:57:50.100580   19502 main.go:141] libmachine: (addons-566823) Reserving static IP address...
	I0723 13:57:50.100593   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has current primary IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.100944   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find host DHCP lease matching {name: "addons-566823", mac: "52:54:00:41:2b:ac", ip: "192.168.39.114"} in network mk-addons-566823
	I0723 13:57:50.171662   19502 main.go:141] libmachine: (addons-566823) DBG | Getting to WaitForSSH function...
	I0723 13:57:50.171687   19502 main.go:141] libmachine: (addons-566823) Reserved static IP address: 192.168.39.114
	I0723 13:57:50.171700   19502 main.go:141] libmachine: (addons-566823) Waiting for SSH to be available...
	I0723 13:57:50.174271   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.174718   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.174754   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.174975   19502 main.go:141] libmachine: (addons-566823) DBG | Using SSH client type: external
	I0723 13:57:50.175018   19502 main.go:141] libmachine: (addons-566823) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa (-rw-------)
	I0723 13:57:50.175047   19502 main.go:141] libmachine: (addons-566823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 13:57:50.175064   19502 main.go:141] libmachine: (addons-566823) DBG | About to run SSH command:
	I0723 13:57:50.175101   19502 main.go:141] libmachine: (addons-566823) DBG | exit 0
	I0723 13:57:50.302235   19502 main.go:141] libmachine: (addons-566823) DBG | SSH cmd err, output: <nil>: 
	I0723 13:57:50.302531   19502 main.go:141] libmachine: (addons-566823) KVM machine creation complete!
	I0723 13:57:50.302848   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:50.303333   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:50.303574   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:50.303763   19502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 13:57:50.303779   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:57:50.305020   19502 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 13:57:50.305035   19502 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 13:57:50.305042   19502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 13:57:50.305047   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.307430   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.307793   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.307820   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.307919   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.308122   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.308268   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.308429   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.308670   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.308880   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.308894   19502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 13:57:50.405582   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 13:57:50.405605   19502 main.go:141] libmachine: Detecting the provisioner...
	I0723 13:57:50.405614   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.408642   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.408967   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.408991   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.409164   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.409346   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.409545   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.409678   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.409834   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.410027   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.410039   19502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 13:57:50.506663   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 13:57:50.506763   19502 main.go:141] libmachine: found compatible host: buildroot
	I0723 13:57:50.506778   19502 main.go:141] libmachine: Provisioning with buildroot...
	I0723 13:57:50.506789   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.507035   19502 buildroot.go:166] provisioning hostname "addons-566823"
	I0723 13:57:50.507059   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.507262   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.510208   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.510607   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.510633   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.510801   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.510976   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.511110   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.511237   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.511415   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.511582   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.511595   19502 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-566823 && echo "addons-566823" | sudo tee /etc/hostname
	I0723 13:57:50.624287   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-566823
	
	I0723 13:57:50.624316   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.626776   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.627128   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.627156   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.627361   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.627544   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.627770   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.627943   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.628110   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.628279   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.628302   19502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-566823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-566823/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-566823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 13:57:50.734982   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
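Note: the two commands above first set the transient hostname and then pin a 127.0.1.1 entry in /etc/hosts so the name resolves without DNS. A quick manual check from inside the guest (for example via "minikube ssh -p addons-566823") would be:

    hostname                        # expected: addons-566823
    grep addons-566823 /etc/hosts   # expected: the 127.0.1.1 line written by the script above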
	I0723 13:57:50.735008   19502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 13:57:50.735031   19502 buildroot.go:174] setting up certificates
	I0723 13:57:50.735044   19502 provision.go:84] configureAuth start
	I0723 13:57:50.735056   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.735334   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:50.738308   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.738817   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.738841   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.739019   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.741385   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.741700   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.741718   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.741868   19502 provision.go:143] copyHostCerts
	I0723 13:57:50.741937   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 13:57:50.742064   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 13:57:50.742145   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 13:57:50.742207   19502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.addons-566823 san=[127.0.0.1 192.168.39.114 addons-566823 localhost minikube]
	I0723 13:57:50.871458   19502 provision.go:177] copyRemoteCerts
	I0723 13:57:50.871532   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 13:57:50.871560   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.874470   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.874754   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.874783   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.874931   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.875098   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.875240   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.875343   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:50.952409   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 13:57:50.974842   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 13:57:50.996745   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 13:57:51.021093   19502 provision.go:87] duration metric: took 286.036544ms to configureAuth
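Note: configureAuth generated a server certificate with the SANs requested at 13:57:50.742 and copied it to /etc/docker/server.pem. A hedged spot-check from inside the guest:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expected to list 127.0.0.1, 192.168.39.114, addons-566823, localhost and minikube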
	I0723 13:57:51.021119   19502 buildroot.go:189] setting minikube options for container-runtime
	I0723 13:57:51.021285   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:57:51.021371   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.023995   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.024327   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.024353   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.024542   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.024810   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.024999   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.025156   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.025404   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:51.025563   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:51.025580   19502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 13:57:51.273761   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
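Note: the tee above writes a one-line sysconfig drop-in and restarts CRI-O so the insecure-registry flag for the service CIDR takes effect. To confirm the drop-in landed and the daemon came back:

    cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio      # expected: active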
	
	I0723 13:57:51.273788   19502 main.go:141] libmachine: Checking connection to Docker...
	I0723 13:57:51.273800   19502 main.go:141] libmachine: (addons-566823) Calling .GetURL
	I0723 13:57:51.275209   19502 main.go:141] libmachine: (addons-566823) DBG | Using libvirt version 6000000
	I0723 13:57:51.277390   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.277733   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.277750   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.277986   19502 main.go:141] libmachine: Docker is up and running!
	I0723 13:57:51.278007   19502 main.go:141] libmachine: Reticulating splines...
	I0723 13:57:51.278014   19502 client.go:171] duration metric: took 24.935997246s to LocalClient.Create
	I0723 13:57:51.278041   19502 start.go:167] duration metric: took 24.936063055s to libmachine.API.Create "addons-566823"
	I0723 13:57:51.278051   19502 start.go:293] postStartSetup for "addons-566823" (driver="kvm2")
	I0723 13:57:51.278061   19502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 13:57:51.278077   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.278461   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 13:57:51.278484   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.280896   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.281145   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.281177   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.281317   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.281507   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.281653   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.281782   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.360282   19502 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 13:57:51.364398   19502 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 13:57:51.364421   19502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 13:57:51.364501   19502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 13:57:51.364548   19502 start.go:296] duration metric: took 86.489306ms for postStartSetup
	I0723 13:57:51.364586   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:51.365074   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:51.367613   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.367951   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.367980   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.368199   19502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json ...
	I0723 13:57:51.368388   19502 start.go:128] duration metric: took 25.044188254s to createHost
	I0723 13:57:51.368412   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.370626   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.370878   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.370904   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.371084   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.371250   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.371417   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.371531   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.371681   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:51.371831   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:51.371845   19502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 13:57:51.470736   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721743071.449258583
	
	I0723 13:57:51.470761   19502 fix.go:216] guest clock: 1721743071.449258583
	I0723 13:57:51.470769   19502 fix.go:229] Guest: 2024-07-23 13:57:51.449258583 +0000 UTC Remote: 2024-07-23 13:57:51.368400792 +0000 UTC m=+25.142952707 (delta=80.857791ms)
	I0723 13:57:51.470787   19502 fix.go:200] guest clock delta is within tolerance: 80.857791ms
	I0723 13:57:51.470793   19502 start.go:83] releasing machines lock for "addons-566823", held for 25.146690322s
	I0723 13:57:51.470818   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.471104   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:51.473941   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.474452   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.474470   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.474680   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475226   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475420   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475514   19502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 13:57:51.475564   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.475677   19502 ssh_runner.go:195] Run: cat /version.json
	I0723 13:57:51.475704   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.478452   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478557   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478819   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.478850   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478948   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.478950   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.478984   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.479100   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.479163   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.479243   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.479332   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.479390   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.479462   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.479607   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.550829   19502 ssh_runner.go:195] Run: systemctl --version
	I0723 13:57:51.584947   19502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 13:57:51.738932   19502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 13:57:51.744575   19502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 13:57:51.744639   19502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 13:57:51.759140   19502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
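Note: the find/mv above only renames the stock podman bridge config out of CRI-O's way; nothing is deleted. The effect is visible as a *.mk_disabled file:

    ls /etc/cni/net.d/   # 87-podman-bridge.conflist.mk_disabled is expected after the step above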
	I0723 13:57:51.759164   19502 start.go:495] detecting cgroup driver to use...
	I0723 13:57:51.759218   19502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 13:57:51.779838   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 13:57:51.793147   19502 docker.go:217] disabling cri-docker service (if available) ...
	I0723 13:57:51.793195   19502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 13:57:51.805781   19502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 13:57:51.818438   19502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 13:57:51.923193   19502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 13:57:52.046606   19502 docker.go:233] disabling docker service ...
	I0723 13:57:52.046668   19502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 13:57:52.060915   19502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 13:57:52.073705   19502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 13:57:52.215736   19502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 13:57:52.326953   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 13:57:52.341293   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 13:57:52.358731   19502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 13:57:52.358801   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.368726   19502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 13:57:52.368821   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.378911   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.388508   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.398355   19502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 13:57:52.407985   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.417845   19502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.433589   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.443392   19502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 13:57:52.452658   19502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 13:57:52.452737   19502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 13:57:52.466357   19502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 13:57:52.475851   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:57:52.591390   19502 ssh_runner.go:195] Run: sudo systemctl restart crio
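Note: the sed edits and the modprobe above only touch /etc/crio/crio.conf.d/02-crio.conf and the bridge netfilter module; a hedged way to confirm they took effect after the restart:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    lsmod | grep br_netfilter                        # loaded by the modprobe above
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should resolve now instead of the earlier stat error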
	I0723 13:57:52.722695   19502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 13:57:52.722782   19502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 13:57:52.726976   19502 start.go:563] Will wait 60s for crictl version
	I0723 13:57:52.727039   19502 ssh_runner.go:195] Run: which crictl
	I0723 13:57:52.730321   19502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 13:57:52.766023   19502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 13:57:52.766144   19502 ssh_runner.go:195] Run: crio --version
	I0723 13:57:52.791208   19502 ssh_runner.go:195] Run: crio --version
	I0723 13:57:52.817964   19502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 13:57:52.819330   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:52.821772   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:52.822119   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:52.822145   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:52.822373   19502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 13:57:52.826252   19502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 13:57:52.837740   19502 kubeadm.go:883] updating cluster {Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 13:57:52.837835   19502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:57:52.837876   19502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 13:57:52.868970   19502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 13:57:52.869040   19502 ssh_runner.go:195] Run: which lz4
	I0723 13:57:52.872752   19502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 13:57:52.876744   19502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 13:57:52.876774   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 13:57:54.052206   19502 crio.go:462] duration metric: took 1.179478604s to copy over tarball
	I0723 13:57:54.052283   19502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 13:57:56.274956   19502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.222640378s)
	I0723 13:57:56.274986   19502 crio.go:469] duration metric: took 2.222757664s to extract the tarball
	I0723 13:57:56.274994   19502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 13:57:56.318004   19502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 13:57:56.356951   19502 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 13:57:56.356975   19502 cache_images.go:84] Images are preloaded, skipping loading
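Note: after the preload tarball is unpacked into /var, the second "crictl images" run finds everything it needs. The same check can be repeated by hand inside the guest:

    sudo crictl images | grep kube-apiserver   # expected: registry.k8s.io/kube-apiserver tagged v1.30.3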
	I0723 13:57:56.356983   19502 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0723 13:57:56.357081   19502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-566823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 13:57:56.357146   19502 ssh_runner.go:195] Run: crio config
	I0723 13:57:56.412554   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:57:56.412578   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:57:56.412587   19502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 13:57:56.412607   19502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-566823 NodeName:addons-566823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 13:57:56.412748   19502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-566823"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 13:57:56.412821   19502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 13:57:56.422155   19502 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 13:57:56.422220   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 13:57:56.431010   19502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 13:57:56.446690   19502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 13:57:56.462055   19502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
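Note: the kubeadm config rendered above is staged here as /var/tmp/minikube/kubeadm.yaml.new and only copied into place right before init runs. A hedged pre-check, assuming the "config validate" subcommand shipped with kubeadm v1.30, would be:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new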
	I0723 13:57:56.477917   19502 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0723 13:57:56.481648   19502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 13:57:56.492533   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:57:56.601403   19502 ssh_runner.go:195] Run: sudo systemctl start kubelet
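Note: kubelet is started here with only the 10-kubeadm.conf drop-in in place; it typically keeps restarting until kubeadm writes /etc/kubernetes/kubelet.conf later in the flow. To see exactly what systemd loaded at this point:

    systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in scp'd above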
	I0723 13:57:56.616573   19502 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823 for IP: 192.168.39.114
	I0723 13:57:56.616599   19502 certs.go:194] generating shared ca certs ...
	I0723 13:57:56.616618   19502 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.616787   19502 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 13:57:56.785134   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt ...
	I0723 13:57:56.785160   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt: {Name:mk36e09d7ac6dd29f323e105c718380c8b560655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.785312   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key ...
	I0723 13:57:56.785323   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key: {Name:mk5bb118f835953a95454c83f6da991c61082a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.785388   19502 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 13:57:56.977261   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt ...
	I0723 13:57:56.977289   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt: {Name:mkbb8d91dd4e6e1519ac2b5cb44d6ea526cac429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.977443   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key ...
	I0723 13:57:56.977453   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key: {Name:mk2b563123a7ab0f3949cbb2747ecfbeb56e3787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.977519   19502 certs.go:256] generating profile certs ...
	I0723 13:57:56.977567   19502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key
	I0723 13:57:56.977579   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt with IP's: []
	I0723 13:57:57.158641   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt ...
	I0723 13:57:57.158676   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: {Name:mkb0b599bc3001e92419b5765ab8147765f8a443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.158854   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key ...
	I0723 13:57:57.158866   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key: {Name:mk564281decac921298dfd4cb0f95eec8dcd82fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.158941   19502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37
	I0723 13:57:57.158962   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0723 13:57:57.273104   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 ...
	I0723 13:57:57.273141   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37: {Name:mk688de0645539df463633501160ac13657adeb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.273314   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37 ...
	I0723 13:57:57.273328   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37: {Name:mk546ea975be19c9ea55e5a690a20c03fc692153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.273405   19502 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt
	I0723 13:57:57.273481   19502 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key
	I0723 13:57:57.273533   19502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key
	I0723 13:57:57.273552   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt with IP's: []
	I0723 13:57:57.621318   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt ...
	I0723 13:57:57.621350   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt: {Name:mkedd6d6ace9f091aa971fec0c1f4d45184621c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.621515   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key ...
	I0723 13:57:57.621526   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key: {Name:mk551dbe38aa839bf357f2e08713ad68f188b641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.621697   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 13:57:57.621732   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 13:57:57.621758   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 13:57:57.621784   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 13:57:57.622333   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 13:57:57.646802   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 13:57:57.677287   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 13:57:57.700767   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 13:57:57.723764   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 13:57:57.745931   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 13:57:57.768798   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 13:57:57.791556   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 13:57:57.814057   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 13:57:57.836303   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 13:57:57.851639   19502 ssh_runner.go:195] Run: openssl version
	I0723 13:57:57.857262   19502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 13:57:57.867796   19502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.871949   19502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.872006   19502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.877811   19502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
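Note: the b5213941.0 link name above is not arbitrary; it is the certificate's OpenSSL subject hash with a ".0" suffix, which is how the system trust store looks certificates up. The pairing can be reproduced with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem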
	I0723 13:57:57.888462   19502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 13:57:57.892983   19502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 13:57:57.893036   19502 kubeadm.go:392] StartCluster: {Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:57:57.893117   19502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 13:57:57.893176   19502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 13:57:57.932055   19502 cri.go:89] found id: ""
	I0723 13:57:57.932120   19502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 13:57:57.944148   19502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 13:57:57.969796   19502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 13:57:57.982882   19502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 13:57:57.982905   19502 kubeadm.go:157] found existing configuration files:
	
	I0723 13:57:57.982948   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 13:57:57.998899   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 13:57:57.998962   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 13:57:58.008834   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 13:57:58.017800   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 13:57:58.017860   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 13:57:58.027707   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 13:57:58.037279   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 13:57:58.037333   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 13:57:58.047150   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 13:57:58.056500   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 13:57:58.056565   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 13:57:58.066135   19502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 13:57:58.260519   19502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 13:58:08.472829   19502 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 13:58:08.472922   19502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 13:58:08.472991   19502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 13:58:08.473126   19502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 13:58:08.473237   19502 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 13:58:08.473332   19502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 13:58:08.475130   19502 out.go:204]   - Generating certificates and keys ...
	I0723 13:58:08.475209   19502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 13:58:08.475286   19502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 13:58:08.475349   19502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 13:58:08.475397   19502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 13:58:08.475448   19502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 13:58:08.475491   19502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 13:58:08.475544   19502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 13:58:08.475719   19502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-566823 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0723 13:58:08.475781   19502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 13:58:08.475881   19502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-566823 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0723 13:58:08.475935   19502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 13:58:08.475992   19502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 13:58:08.476030   19502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 13:58:08.476127   19502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 13:58:08.476189   19502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 13:58:08.476236   19502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 13:58:08.476285   19502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 13:58:08.476348   19502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 13:58:08.476396   19502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 13:58:08.476468   19502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 13:58:08.476527   19502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 13:58:08.478016   19502 out.go:204]   - Booting up control plane ...
	I0723 13:58:08.478106   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 13:58:08.478171   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 13:58:08.478227   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 13:58:08.478324   19502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 13:58:08.478430   19502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 13:58:08.478482   19502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 13:58:08.478635   19502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 13:58:08.478737   19502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 13:58:08.478823   19502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 515.057931ms
	I0723 13:58:08.478893   19502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 13:58:08.478946   19502 kubeadm.go:310] [api-check] The API server is healthy after 5.002578534s
	I0723 13:58:08.479033   19502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 13:58:08.479139   19502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 13:58:08.479202   19502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 13:58:08.479482   19502 kubeadm.go:310] [mark-control-plane] Marking the node addons-566823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 13:58:08.479568   19502 kubeadm.go:310] [bootstrap-token] Using token: uyhqod.zgrugty1wvig1w59
	I0723 13:58:08.481081   19502 out.go:204]   - Configuring RBAC rules ...
	I0723 13:58:08.481205   19502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 13:58:08.481307   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 13:58:08.481486   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 13:58:08.481658   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 13:58:08.481758   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 13:58:08.481873   19502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 13:58:08.482037   19502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 13:58:08.482108   19502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 13:58:08.482161   19502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 13:58:08.482167   19502 kubeadm.go:310] 
	I0723 13:58:08.482215   19502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 13:58:08.482221   19502 kubeadm.go:310] 
	I0723 13:58:08.482287   19502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 13:58:08.482295   19502 kubeadm.go:310] 
	I0723 13:58:08.482327   19502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 13:58:08.482391   19502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 13:58:08.482468   19502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 13:58:08.482476   19502 kubeadm.go:310] 
	I0723 13:58:08.482535   19502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 13:58:08.482551   19502 kubeadm.go:310] 
	I0723 13:58:08.482615   19502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 13:58:08.482628   19502 kubeadm.go:310] 
	I0723 13:58:08.482677   19502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 13:58:08.482741   19502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 13:58:08.482824   19502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 13:58:08.482833   19502 kubeadm.go:310] 
	I0723 13:58:08.482947   19502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 13:58:08.483059   19502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 13:58:08.483067   19502 kubeadm.go:310] 
	I0723 13:58:08.483154   19502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uyhqod.zgrugty1wvig1w59 \
	I0723 13:58:08.483266   19502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 13:58:08.483298   19502 kubeadm.go:310] 	--control-plane 
	I0723 13:58:08.483306   19502 kubeadm.go:310] 
	I0723 13:58:08.483421   19502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 13:58:08.483430   19502 kubeadm.go:310] 
	I0723 13:58:08.483546   19502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uyhqod.zgrugty1wvig1w59 \
	I0723 13:58:08.483713   19502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 13:58:08.483732   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:58:08.483741   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:58:08.485463   19502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 13:58:08.486819   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 13:58:08.497439   19502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
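The 496-byte conflist itself is not reproduced in the log. As a rough sketch only, a bridge-plus-portmap conflist of the kind minikube writes for its bridge CNI generally looks like the following; the field values and the 10.244.0.0/16 subnet here are illustrative assumptions, not the actual file contents:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF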
	I0723 13:58:08.516918   19502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 13:58:08.516985   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:08.517046   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-566823 minikube.k8s.io/updated_at=2024_07_23T13_58_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=addons-566823 minikube.k8s.io/primary=true
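The two kubectl invocations above grant cluster-admin to the kube-system default service account (so addon components can manage cluster objects) and stamp the node with minikube's version, commit, and primary labels. A quick way to confirm both took effect, using the same admin kubeconfig, would be something like the following; these checks are illustrative and are not run by the test:

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-566823 --show-labels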
	I0723 13:58:08.537181   19502 ops.go:34] apiserver oom_adj: -16
	I0723 13:58:08.633215   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:09.134020   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:09.633460   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:10.133970   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:10.633381   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:11.133664   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:11.633388   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:12.133248   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:12.634254   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:13.134061   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:13.634245   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:14.133413   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:14.633421   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:15.133822   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:15.633240   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:16.133347   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:16.634079   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:17.133700   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:17.633697   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:18.133777   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:18.633494   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:19.133930   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:19.633957   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:20.133368   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:20.633957   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.133510   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.634019   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.728145   19502 kubeadm.go:1113] duration metric: took 13.211219421s to wait for elevateKubeSystemPrivileges
	I0723 13:58:21.728174   19502 kubeadm.go:394] duration metric: took 23.835142379s to StartCluster
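The burst of repeated "kubectl get sa default" calls above is minikube polling for the default ServiceAccount to appear before granting kube-system privileges; the 13.2s metric is how long that poll took. A rough shell equivalent of the same wait, as a sketch rather than minikube's actual code path, is:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount exists
    done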
	I0723 13:58:21.728194   19502 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:58:21.728327   19502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:58:21.728966   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:58:21.729216   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 13:58:21.729246   19502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 13:58:21.729290   19502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
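The toEnable map above lists every addon this profile will turn on (csi-hostpath-driver, ingress, ingress-dns, metrics-server, registry, volcano, yakd, and so on), and the per-addon "Setting addon ... in addons-566823" lines that follow apply it one entry at a time. The same switches can be flipped by hand against this profile with the minikube CLI, for example:

    out/minikube-linux-amd64 -p addons-566823 addons list
    out/minikube-linux-amd64 -p addons-566823 addons enable metrics-server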
	I0723 13:58:21.729401   19502 addons.go:69] Setting yakd=true in profile "addons-566823"
	I0723 13:58:21.729433   19502 addons.go:234] Setting addon yakd=true in "addons-566823"
	I0723 13:58:21.729468   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729522   19502 addons.go:69] Setting inspektor-gadget=true in profile "addons-566823"
	I0723 13:58:21.729539   19502 addons.go:69] Setting storage-provisioner=true in profile "addons-566823"
	I0723 13:58:21.729562   19502 addons.go:234] Setting addon inspektor-gadget=true in "addons-566823"
	I0723 13:58:21.729566   19502 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-566823"
	I0723 13:58:21.729584   19502 addons.go:69] Setting registry=true in profile "addons-566823"
	I0723 13:58:21.729576   19502 addons.go:69] Setting volcano=true in profile "addons-566823"
	I0723 13:58:21.729603   19502 addons.go:234] Setting addon registry=true in "addons-566823"
	I0723 13:58:21.729609   19502 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-566823"
	I0723 13:58:21.729616   19502 addons.go:69] Setting metrics-server=true in profile "addons-566823"
	I0723 13:58:21.729622   19502 addons.go:234] Setting addon volcano=true in "addons-566823"
	I0723 13:58:21.729625   19502 addons.go:69] Setting helm-tiller=true in profile "addons-566823"
	I0723 13:58:21.729637   19502 addons.go:234] Setting addon metrics-server=true in "addons-566823"
	I0723 13:58:21.729640   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729650   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729657   19502 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-566823"
	I0723 13:58:21.729661   19502 addons.go:69] Setting gcp-auth=true in profile "addons-566823"
	I0723 13:58:21.729667   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729675   19502 mustload.go:65] Loading cluster: addons-566823
	I0723 13:58:21.729677   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729600   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:58:21.729604   19502 addons.go:69] Setting cloud-spanner=true in profile "addons-566823"
	I0723 13:58:21.729804   19502 addons.go:234] Setting addon cloud-spanner=true in "addons-566823"
	I0723 13:58:21.729828   19502 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-566823"
	I0723 13:58:21.729855   19502 addons.go:69] Setting ingress=true in profile "addons-566823"
	I0723 13:58:21.729874   19502 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-566823"
	I0723 13:58:21.729881   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:58:21.729895   19502 addons.go:234] Setting addon ingress=true in "addons-566823"
	I0723 13:58:21.729922   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.729953   19502 addons.go:69] Setting ingress-dns=true in profile "addons-566823"
	I0723 13:58:21.729981   19502 addons.go:234] Setting addon ingress-dns=true in "addons-566823"
	I0723 13:58:21.729990   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730005   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730026   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730031   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730031   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730028   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730063   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730088   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730120   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729833   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730223   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730279   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730317   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730351   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730364   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730406   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729597   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730533   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730558   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730071   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729930   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729575   19502 addons.go:234] Setting addon storage-provisioner=true in "addons-566823"
	I0723 13:58:21.729652   19502 addons.go:69] Setting default-storageclass=true in profile "addons-566823"
	I0723 13:58:21.731125   19502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-566823"
	I0723 13:58:21.729604   19502 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-566823"
	I0723 13:58:21.731199   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729943   19502 addons.go:69] Setting volumesnapshots=true in profile "addons-566823"
	I0723 13:58:21.731324   19502 addons.go:234] Setting addon volumesnapshots=true in "addons-566823"
	I0723 13:58:21.731365   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.731508   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731532   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731604   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731639   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731677   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731693   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731131   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729644   19502 addons.go:234] Setting addon helm-tiller=true in "addons-566823"
	I0723 13:58:21.732027   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.732087   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.732118   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.742498   19502 out.go:177] * Verifying Kubernetes components...
	I0723 13:58:21.744203   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:58:21.751122   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0723 13:58:21.751612   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0723 13:58:21.751814   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.752225   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.752774   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.752792   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.752965   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.752998   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.753070   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I0723 13:58:21.753567   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.753633   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.753690   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.754177   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.754194   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.754257   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.754290   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.754937   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.754971   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.755123   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.755718   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0723 13:58:21.760804   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0723 13:58:21.761285   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.761852   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.761878   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.762264   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.762455   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.762911   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I0723 13:58:21.763418   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.763964   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.763984   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.764424   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.764595   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.764901   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.764929   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.765020   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.766674   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.766698   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.766798   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.766823   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767115   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.767139   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767333   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.767369   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767972   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.769049   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.769068   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.769133   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0723 13:58:21.771008   19502 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-566823"
	I0723 13:58:21.771050   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.771408   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.771425   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.771709   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.771806   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.772027   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.772902   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.772918   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.774628   19502 addons.go:234] Setting addon default-storageclass=true in "addons-566823"
	I0723 13:58:21.774667   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.775005   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.775032   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.775555   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.776087   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.776119   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.785003   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0723 13:58:21.814755   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.814942   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0723 13:58:21.815078   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0723 13:58:21.815329   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0723 13:58:21.815421   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I0723 13:58:21.815619   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0723 13:58:21.815755   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0723 13:58:21.815887   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I0723 13:58:21.816222   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0723 13:58:21.816289   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0723 13:58:21.816405   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0723 13:58:21.816502   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0723 13:58:21.816850   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.816935   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817006   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817073   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817414   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817434   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817580   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817588   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817705   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817714   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817766   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817894   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817903   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817951   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.818435   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.818585   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.818769   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.818781   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.818915   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.818933   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.819086   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.819122   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.819180   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.819224   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.819305   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.819977   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.820086   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820153   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820167   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820193   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.820266   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.820320   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.820273   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820374   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820376   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.820283   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820421   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.820662   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.820688   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.821111   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.821212   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821266   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821298   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821412   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.821433   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.821497   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.821673   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.821687   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.821772   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.821797   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.821976   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.822008   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.822602   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.822770   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.822833   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.822883   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.822928   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.823055   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.823066   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.824349   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.824356   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0723 13:58:21.824878   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.824917   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.825750   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.826874   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.826904   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.827015   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0723 13:58:21.827356   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.827378   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.827420   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.827899   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.827958   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.828027   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.828058   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.828805   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.829390   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.829571   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0723 13:58:21.829754   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.829939   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0723 13:58:21.830202   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.830857   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.831188   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.832533   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.833453   19502 out.go:177]   - Using image docker.io/registry:2.8.3
	I0723 13:58:21.833569   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.833591   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.833908   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0723 13:58:21.834873   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.835492   19502 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0723 13:58:21.836567   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0723 13:58:21.836797   19502 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0723 13:58:21.836809   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0723 13:58:21.836826   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.839914   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0723 13:58:21.841432   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.841613   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.841781   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.842007   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.843334   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.843363   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.843384   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.843651   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0723 13:58:21.844501   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0723 13:58:21.844882   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.845025   19502 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0723 13:58:21.845364   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.845387   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.845734   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.846269   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0723 13:58:21.846275   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.846312   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.846493   19502 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 13:58:21.846510   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0723 13:58:21.846529   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.849798   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.850095   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0723 13:58:21.850241   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.850276   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.850466   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.850624   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.850796   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.850943   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.851836   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0723 13:58:21.851852   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0723 13:58:21.851866   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.852215   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0723 13:58:21.852837   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.853617   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.853638   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.854437   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.854713   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.855296   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.858535   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.858549   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.858566   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.858816   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.858978   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.859117   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.862687   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.865002   19502 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0723 13:58:21.865425   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0723 13:58:21.865864   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.866339   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.866355   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.866468   19502 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0723 13:58:21.866481   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0723 13:58:21.866496   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.867106   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.867849   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.868235   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0723 13:58:21.868723   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.869363   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.869379   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.869982   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.870132   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.870267   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.871818   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.872267   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.872260   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.872291   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.872432   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.872589   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.872733   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.872867   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.874065   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0723 13:58:21.874368   19502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 13:58:21.874430   19502 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0723 13:58:21.875214   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.875723   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.875740   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.876106   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.876297   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.876412   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0723 13:58:21.876712   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.877138   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.877155   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.877451   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.877622   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.877788   19502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 13:58:21.877810   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 13:58:21.877826   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.877904   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.878334   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0723 13:58:21.878351   19502 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0723 13:58:21.878367   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.881125   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.881435   19502 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0723 13:58:21.881631   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0723 13:58:21.882015   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.882433   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.882589   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.882607   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.882682   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.882709   19502 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0723 13:58:21.882732   19502 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0723 13:58:21.882756   19502 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0723 13:58:21.882772   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.883077   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.883152   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.883461   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.883481   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.883538   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.883780   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.883842   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.883890   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0723 13:58:21.884022   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.884189   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 13:58:21.884203   19502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 13:58:21.884220   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.884341   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.884942   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.885068   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.885077   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.885295   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.885362   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.885477   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.885661   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.885738   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0723 13:58:21.885784   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.886274   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.886405   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.886683   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.886806   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.886824   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.887260   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.887273   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.887644   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.887717   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889106   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.889125   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889316   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.889374   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889559   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.889758   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0723 13:58:21.889822   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.890020   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.890276   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.890293   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.890333   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.890417   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.890422   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0723 13:58:21.890661   19502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 13:58:21.890679   19502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 13:58:21.890694   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.890748   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.890898   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.891050   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.891356   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.891508   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0723 13:58:21.891524   19502 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0723 13:58:21.891540   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.891589   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.892194   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.892214   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.892606   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.892803   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.893006   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:21.893856   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42233
	I0723 13:58:21.894305   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.894487   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.894620   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.894945   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.894976   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.895297   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.895106   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0723 13:58:21.895204   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.895493   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.895509   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.895590   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.895627   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.895715   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.895724   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:21.895833   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.896019   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.896119   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.896127   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.896258   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.896384   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.896563   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.896590   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.896663   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.897087   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.897100   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.897112   19502 out.go:177]   - Using image docker.io/busybox:stable
	I0723 13:58:21.897494   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.897718   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.898334   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0723 13:58:21.898897   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.899133   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:21.899149   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:21.899306   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:21.899319   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:21.899328   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:21.899334   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:21.899492   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:21.899506   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	W0723 13:58:21.899579   19502 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0723 13:58:21.899772   19502 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 13:58:21.899788   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0723 13:58:21.899800   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.900198   19502 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0723 13:58:21.901623   19502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 13:58:21.901637   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0723 13:58:21.901647   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.901748   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.902916   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.903305   19502 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0723 13:58:21.903471   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.903489   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.903595   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.903783   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.903925   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.904094   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.904898   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0723 13:58:21.904911   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0723 13:58:21.904922   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.904995   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.905362   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.905391   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.905685   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.905884   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.906025   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.906151   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.907492   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.907794   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.907816   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.907853   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0723 13:58:21.908009   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.908176   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.908187   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.908269   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.908349   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.908835   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.908856   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	W0723 13:58:21.908983   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39210->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.909002   19502 retry.go:31] will retry after 169.494817ms: ssh: handshake failed: read tcp 192.168.39.1:39210->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.909315   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.909472   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.911153   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.913154   19502 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0723 13:58:21.914697   19502 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 13:58:21.914716   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0723 13:58:21.914733   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.917114   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.917414   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.917445   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.917662   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.917835   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.917970   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.918076   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	W0723 13:58:21.926120   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39224->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.926150   19502 retry.go:31] will retry after 313.981963ms: ssh: handshake failed: read tcp 192.168.39.1:39224->192.168.39.114:22: read: connection reset by peer
	W0723 13:58:22.079639   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39236->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:22.079665   19502 retry.go:31] will retry after 539.540893ms: ssh: handshake failed: read tcp 192.168.39.1:39236->192.168.39.114:22: read: connection reset by peer
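
Note: the "ssh: handshake failed ... connection reset by peer" warnings above are transient — the guest's sshd is still settling while several addon installers dial in at once — and each dial is simply retried after a growing delay, as the retry.go lines show. A minimal, self-contained sketch of that retry-with-backoff pattern (illustrative only; not minikube's actual retry helper) could look like this in Go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op up to attempts times, sleeping a jittered, growing delay
    // between failures, and returns the last error if every attempt fails.
    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Exponential backoff plus a random fraction of the base delay.
            sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 100*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: connection reset by peer")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
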
	I0723 13:58:22.202176   19502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 13:58:22.202245   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
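
Note: the command above edits the coredns ConfigMap in place — it dumps the Corefile, uses sed to insert a hosts block ahead of the forward plugin and a log directive ahead of errors, then feeds the result back through kubectl replace. Rendered readably, the fragment it injects is roughly:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

so that pods can resolve host.minikube.internal to the host-side gateway IP of the VM network.
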
	I0723 13:58:22.230801   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 13:58:22.311235   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 13:58:22.311270   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0723 13:58:22.324790   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0723 13:58:22.324818   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0723 13:58:22.327779   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 13:58:22.345388   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0723 13:58:22.345414   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0723 13:58:22.355748   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0723 13:58:22.372003   19502 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0723 13:58:22.372029   19502 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0723 13:58:22.376325   19502 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0723 13:58:22.376350   19502 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0723 13:58:22.437458   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 13:58:22.448940   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 13:58:22.448965   19502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 13:58:22.479853   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 13:58:22.481228   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0723 13:58:22.481247   19502 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0723 13:58:22.485486   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 13:58:22.518685   19502 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0723 13:58:22.518710   19502 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0723 13:58:22.534869   19502 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0723 13:58:22.534888   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0723 13:58:22.546789   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0723 13:58:22.546816   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0723 13:58:22.575320   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0723 13:58:22.575344   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0723 13:58:22.602410   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 13:58:22.657250   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 13:58:22.657271   19502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 13:58:22.674093   19502 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0723 13:58:22.674117   19502 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0723 13:58:22.676065   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0723 13:58:22.676082   19502 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0723 13:58:22.701679   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0723 13:58:22.785069   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0723 13:58:22.785098   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0723 13:58:22.815135   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0723 13:58:22.815172   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0723 13:58:22.958101   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0723 13:58:22.958124   19502 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0723 13:58:22.966549   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 13:58:22.978592   19502 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0723 13:58:22.978613   19502 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0723 13:58:23.024720   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0723 13:58:23.024744   19502 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0723 13:58:23.100506   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0723 13:58:23.100540   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0723 13:58:23.157324   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0723 13:58:23.157349   19502 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0723 13:58:23.248309   19502 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:23.248337   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0723 13:58:23.250488   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0723 13:58:23.250510   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0723 13:58:23.360492   19502 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0723 13:58:23.360512   19502 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0723 13:58:23.429353   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0723 13:58:23.429382   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0723 13:58:23.465635   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0723 13:58:23.524673   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:23.546884   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0723 13:58:23.546911   19502 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0723 13:58:23.633648   19502 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0723 13:58:23.633676   19502 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0723 13:58:23.647703   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0723 13:58:23.647725   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0723 13:58:23.769358   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0723 13:58:23.866316   19502 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 13:58:23.866350   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0723 13:58:23.867719   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0723 13:58:23.867735   19502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0723 13:58:23.953525   19502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.751244478s)
	I0723 13:58:23.953562   19502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0723 13:58:23.953563   19502 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.751356542s)
	I0723 13:58:23.954217   19502 node_ready.go:35] waiting up to 6m0s for node "addons-566823" to be "Ready" ...
	I0723 13:58:23.961880   19502 node_ready.go:49] node "addons-566823" has status "Ready":"True"
	I0723 13:58:23.961905   19502 node_ready.go:38] duration metric: took 7.623495ms for node "addons-566823" to be "Ready" ...
	I0723 13:58:23.961912   19502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 13:58:23.994410   19502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:24.072223   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 13:58:24.167818   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0723 13:58:24.167842   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0723 13:58:24.401655   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0723 13:58:24.401680   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0723 13:58:24.457418   19502 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-566823" context rescaled to 1 replicas
	I0723 13:58:24.752279   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 13:58:24.752312   19502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0723 13:58:25.069272   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 13:58:26.146944   19502 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace has status "Ready":"False"
	I0723 13:58:26.374629   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.143794582s)
	I0723 13:58:26.374675   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374686   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.374733   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.046925928s)
	I0723 13:58:26.374770   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.018998287s)
	I0723 13:58:26.374778   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374787   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374790   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.374795   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375101   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375167   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375176   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375184   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375185   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375194   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375194   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375243   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375647   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375664   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375679   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375678   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375688   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375693   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375696   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375705   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375714   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375722   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375729   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375969   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375996   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.376004   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.619278   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.619301   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.619685   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.619707   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:28.620840   19502 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.620872   19502 pod_ready.go:81] duration metric: took 4.626433715s for pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.620885   19502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.736222   19502 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.736254   19502 pod_ready.go:81] duration metric: took 115.361023ms for pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.736266   19502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.806591   19502 pod_ready.go:92] pod "etcd-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.806617   19502 pod_ready.go:81] duration metric: took 70.343575ms for pod "etcd-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.806631   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.859260   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0723 13:58:28.859310   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:28.862421   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:28.862967   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:28.862999   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:28.863138   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:28.863353   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:28.863546   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:28.863683   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:28.890591   19502 pod_ready.go:92] pod "kube-apiserver-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.890616   19502 pod_ready.go:81] duration metric: took 83.97937ms for pod "kube-apiserver-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.890627   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.935088   19502 pod_ready.go:92] pod "kube-controller-manager-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.935111   19502 pod_ready.go:81] duration metric: took 44.475142ms for pod "kube-controller-manager-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.935125   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhm7l" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.980609   19502 pod_ready.go:92] pod "kube-proxy-dhm7l" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.980630   19502 pod_ready.go:81] duration metric: took 45.499372ms for pod "kube-proxy-dhm7l" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.980640   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:29.136374   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0723 13:58:29.208595   19502 addons.go:234] Setting addon gcp-auth=true in "addons-566823"
	I0723 13:58:29.208655   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:29.209090   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:29.209134   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:29.224251   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0723 13:58:29.224692   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:29.225164   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:29.225181   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:29.225541   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:29.225997   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:29.226021   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:29.242031   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0723 13:58:29.242487   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:29.242959   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:29.242981   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:29.243280   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:29.243490   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:29.245040   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:29.245261   19502 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0723 13:58:29.245287   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:29.248182   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:29.248644   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:29.248674   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:29.248828   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:29.249043   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:29.249209   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:29.249406   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:29.305516   19502 pod_ready.go:92] pod "kube-scheduler-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:29.305545   19502 pod_ready.go:81] duration metric: took 324.897765ms for pod "kube-scheduler-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:29.305555   19502 pod_ready.go:38] duration metric: took 5.343633359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 13:58:29.305574   19502 api_server.go:52] waiting for apiserver process to appear ...
	I0723 13:58:29.305651   19502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
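
Note: the node_ready/pod_ready waits above poll the API server until the node and each system-critical pod report the Ready condition, and the pgrep that follows confirms the kube-apiserver process is actually running inside the guest. A minimal sketch of that kind of readiness poll using client-go (not minikube's code; the kubeconfig path and pod name below are placeholders taken from this log, and the k8s.io/client-go module is assumed to be available):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from an on-disk kubeconfig (placeholder path).
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        for {
            // Fetch the pod and check its Ready condition (placeholder pod name).
            pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-7db6d8ff4d-4zjr6", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }
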
	I0723 13:58:29.853104   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.415607729s)
	I0723 13:58:29.853144   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.373261095s)
	I0723 13:58:29.853161   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853173   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853181   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853201   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853288   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.367769604s)
	I0723 13:58:29.853319   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853335   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853372   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.151664239s)
	I0723 13:58:29.853402   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853419   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853321   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.250882474s)
	I0723 13:58:29.853502   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853517   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853521   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.886943599s)
	I0723 13:58:29.853544   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853562   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853642   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.387973032s)
	I0723 13:58:29.853674   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853684   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853760   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.329048662s)
	I0723 13:58:29.853777   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853793   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	W0723 13:58:29.853802   19502 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0723 13:58:29.853821   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853833   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.853840   19502 retry.go:31] will retry after 174.923181ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
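
Note: the failed apply above bundles the VolumeSnapshotClass object together with the CRDs that define it in a single kubectl apply. The CRDs are created, but they are not yet established (served) by the API server when the VolumeSnapshotClass is submitted, which produces the "no matches for kind ... ensure CRDs are installed first" error; the retry a moment later succeeds once the CRD is being served. One way to avoid that race — a sketch under the assumption that kubectl and a valid kubeconfig are on PATH, not what minikube does here — is to wait for the CRD's established condition before applying resources that depend on it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Block until the VolumeSnapshotClass CRD reports condition=established,
        // then it is safe to apply VolumeSnapshotClass objects.
        cmd := exec.Command("kubectl", "wait",
            "--for=condition=established",
            "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("wait failed:", err)
        }
    }
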
	I0723 13:58:29.853844   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853865   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853873   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853876   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.853927   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853935   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853946   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853967   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853992   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853999   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854006   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854013   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854041   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.084651231s)
	I0723 13:58:29.854063   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854061   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854070   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854074   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854079   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854085   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853909   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854117   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854125   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854133   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854311   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.854341   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854352   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854360   19502 addons.go:475] Verifying addon registry=true in "addons-566823"
	I0723 13:58:29.854589   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854607   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854614   19502 addons.go:475] Verifying addon ingress=true in "addons-566823"
	I0723 13:58:29.854717   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854728   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854900   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.854924   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854932   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.855318   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.783057659s)
	I0723 13:58:29.855359   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.855369   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.855487   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.855518   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.855525   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.855540   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.855553   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.856106   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856136   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856144   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856549   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856568   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856577   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.856596   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.856680   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856693   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856701   19502 addons.go:475] Verifying addon metrics-server=true in "addons-566823"
	I0723 13:58:29.856837   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856864   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856870   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856905   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856940   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856951   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856968   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.856975   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.857036   19502 out.go:177] * Verifying ingress addon...
	I0723 13:58:29.857064   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.857075   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.857084   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.857092   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.857186   19502 out.go:177] * Verifying registry addon...
	I0723 13:58:29.857344   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.857367   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.857799   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.858997   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.859019   19502 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-566823 service yakd-dashboard -n yakd-dashboard
	
	I0723 13:58:29.859024   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.859085   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.860000   19502 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0723 13:58:29.860148   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0723 13:58:29.891120   19502 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 13:58:29.891144   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:29.891218   19502 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0723 13:58:29.891237   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
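The long runs of kapi.go:96 lines that follow are the addon verifier polling each label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, =csi-hostpath-driver, =gcp-auth) until every matching pod leaves Pending. A minimal client-go sketch of that kind of label-selector poll (hypothetical helper names, not minikube's actual kapi.go; assumes a recent client-go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching the label selector in the namespace
// until every match reports phase Running, or the timeout expires.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // API hiccup or pods not created yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log lines below
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(context.Background(), cs,
		"ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all ingress-nginx pods are Running")
}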
	I0723 13:58:29.912473   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.912492   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.912768   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.912783   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.912809   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:30.029357   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:30.365800   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:30.367052   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:30.867393   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:30.867935   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:31.123139   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.053819252s)
	I0723 13:58:31.123187   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.123195   19502 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.817516334s)
	I0723 13:58:31.123225   19502 api_server.go:72] duration metric: took 9.393919744s to wait for apiserver process to appear ...
	I0723 13:58:31.123236   19502 api_server.go:88] waiting for apiserver healthz status ...
	I0723 13:58:31.123238   19502 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.877955247s)
	I0723 13:58:31.123256   19502 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0723 13:58:31.123201   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.123737   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.123752   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.123756   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:31.123760   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.123849   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.124133   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.124149   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.124160   19502 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-566823"
	I0723 13:58:31.124866   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:31.125760   19502 out.go:177] * Verifying csi-hostpath-driver addon...
	I0723 13:58:31.127664   19502 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0723 13:58:31.128318   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0723 13:58:31.129235   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0723 13:58:31.129255   19502 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0723 13:58:31.137062   19502 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0723 13:58:31.144990   19502 api_server.go:141] control plane version: v1.30.3
	I0723 13:58:31.145023   19502 api_server.go:131] duration metric: took 21.779021ms to wait for apiserver health ...
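The healthz wait just above amounts to a GET against the apiserver's /healthz path until it returns 200 with the literal body "ok". A minimal client-go sketch of the same probe (hypothetical example built from the kubeconfig path shown in this log, not the test harness's own code):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz through the discovery REST client; a healthy control plane
	// answers 200 with the body "ok", matching the "returned 200: ok" line above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}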
	I0723 13:58:31.145033   19502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 13:58:31.166060   19502 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 13:58:31.166080   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:31.175064   19502 system_pods.go:59] 19 kube-system pods found
	I0723 13:58:31.175103   19502 system_pods.go:61] "coredns-7db6d8ff4d-4zjr6" [44af35b9-1b02-4ea2-ae0c-edc96976f89a] Running
	I0723 13:58:31.175109   19502 system_pods.go:61] "coredns-7db6d8ff4d-jhdm4" [fa9b7640-f730-448e-942f-44fd0788921e] Running
	I0723 13:58:31.175116   19502 system_pods.go:61] "csi-hostpath-attacher-0" [69259ffc-bf8b-4c26-bfa8-e06e26e990eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0723 13:58:31.175121   19502 system_pods.go:61] "csi-hostpath-resizer-0" [8af26a5d-3cc4-4627-b99f-49f1153b5fac] Pending
	I0723 13:58:31.175131   19502 system_pods.go:61] "csi-hostpathplugin-gnjgh" [0d878af2-8cec-4825-910d-8eb02e65b9ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0723 13:58:31.175153   19502 system_pods.go:61] "etcd-addons-566823" [009c1e05-4ba2-4525-bca9-2834a1b4a836] Running
	I0723 13:58:31.175161   19502 system_pods.go:61] "kube-apiserver-addons-566823" [f8a4a022-c913-4db5-ad61-304ee63f66a7] Running
	I0723 13:58:31.175166   19502 system_pods.go:61] "kube-controller-manager-addons-566823" [32f9ec49-5bb3-45f4-8f86-969feb94d86e] Running
	I0723 13:58:31.175174   19502 system_pods.go:61] "kube-ingress-dns-minikube" [03cc5ad6-8256-43b3-b473-93939d6d75cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0723 13:58:31.175181   19502 system_pods.go:61] "kube-proxy-dhm7l" [9cf78545-7300-4f1a-a947-7459b858880d] Running
	I0723 13:58:31.175185   19502 system_pods.go:61] "kube-scheduler-addons-566823" [6e151043-406d-40fb-bc07-f56affe614fa] Running
	I0723 13:58:31.175191   19502 system_pods.go:61] "metrics-server-c59844bb4-f52cd" [6b45f2b1-e48c-4097-aa53-5c2f5fea4806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 13:58:31.175200   19502 system_pods.go:61] "nvidia-device-plugin-daemonset-ntcgv" [fa2530a9-7fcd-4a19-bde9-4a8e1607e1e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0723 13:58:31.175209   19502 system_pods.go:61] "registry-656c9c8d9c-4gvbc" [191b0c30-0add-4831-9cb0-de8b776cedc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0723 13:58:31.175215   19502 system_pods.go:61] "registry-proxy-4b47m" [02461034-b1da-43d3-8017-4b96ba1b9c2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0723 13:58:31.175224   19502 system_pods.go:61] "snapshot-controller-745499f584-hw5vj" [93d07ee6-b8df-4528-9996-a505db12639b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.175237   19502 system_pods.go:61] "snapshot-controller-745499f584-r8tcx" [cd8e271a-0a4e-4404-afdc-402eb6bd57ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.175243   19502 system_pods.go:61] "storage-provisioner" [bd28f68d-bdb2-47cf-8029-1043b5280270] Running
	I0723 13:58:31.175255   19502 system_pods.go:61] "tiller-deploy-6677d64bcd-598dj" [98da9631-ad0b-4406-b5c6-c709e679ab9d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0723 13:58:31.175271   19502 system_pods.go:74] duration metric: took 30.23144ms to wait for pod list to return data ...
	I0723 13:58:31.175287   19502 default_sa.go:34] waiting for default service account to be created ...
	I0723 13:58:31.189011   19502 default_sa.go:45] found service account: "default"
	I0723 13:58:31.189038   19502 default_sa.go:55] duration metric: took 13.741176ms for default service account to be created ...
	I0723 13:58:31.189051   19502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 13:58:31.206724   19502 system_pods.go:86] 19 kube-system pods found
	I0723 13:58:31.206749   19502 system_pods.go:89] "coredns-7db6d8ff4d-4zjr6" [44af35b9-1b02-4ea2-ae0c-edc96976f89a] Running
	I0723 13:58:31.206755   19502 system_pods.go:89] "coredns-7db6d8ff4d-jhdm4" [fa9b7640-f730-448e-942f-44fd0788921e] Running
	I0723 13:58:31.206762   19502 system_pods.go:89] "csi-hostpath-attacher-0" [69259ffc-bf8b-4c26-bfa8-e06e26e990eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0723 13:58:31.206769   19502 system_pods.go:89] "csi-hostpath-resizer-0" [8af26a5d-3cc4-4627-b99f-49f1153b5fac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0723 13:58:31.206777   19502 system_pods.go:89] "csi-hostpathplugin-gnjgh" [0d878af2-8cec-4825-910d-8eb02e65b9ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0723 13:58:31.206782   19502 system_pods.go:89] "etcd-addons-566823" [009c1e05-4ba2-4525-bca9-2834a1b4a836] Running
	I0723 13:58:31.206787   19502 system_pods.go:89] "kube-apiserver-addons-566823" [f8a4a022-c913-4db5-ad61-304ee63f66a7] Running
	I0723 13:58:31.206791   19502 system_pods.go:89] "kube-controller-manager-addons-566823" [32f9ec49-5bb3-45f4-8f86-969feb94d86e] Running
	I0723 13:58:31.206799   19502 system_pods.go:89] "kube-ingress-dns-minikube" [03cc5ad6-8256-43b3-b473-93939d6d75cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0723 13:58:31.206805   19502 system_pods.go:89] "kube-proxy-dhm7l" [9cf78545-7300-4f1a-a947-7459b858880d] Running
	I0723 13:58:31.206810   19502 system_pods.go:89] "kube-scheduler-addons-566823" [6e151043-406d-40fb-bc07-f56affe614fa] Running
	I0723 13:58:31.206817   19502 system_pods.go:89] "metrics-server-c59844bb4-f52cd" [6b45f2b1-e48c-4097-aa53-5c2f5fea4806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 13:58:31.206823   19502 system_pods.go:89] "nvidia-device-plugin-daemonset-ntcgv" [fa2530a9-7fcd-4a19-bde9-4a8e1607e1e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0723 13:58:31.206831   19502 system_pods.go:89] "registry-656c9c8d9c-4gvbc" [191b0c30-0add-4831-9cb0-de8b776cedc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0723 13:58:31.206839   19502 system_pods.go:89] "registry-proxy-4b47m" [02461034-b1da-43d3-8017-4b96ba1b9c2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0723 13:58:31.206848   19502 system_pods.go:89] "snapshot-controller-745499f584-hw5vj" [93d07ee6-b8df-4528-9996-a505db12639b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.206857   19502 system_pods.go:89] "snapshot-controller-745499f584-r8tcx" [cd8e271a-0a4e-4404-afdc-402eb6bd57ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.206863   19502 system_pods.go:89] "storage-provisioner" [bd28f68d-bdb2-47cf-8029-1043b5280270] Running
	I0723 13:58:31.206869   19502 system_pods.go:89] "tiller-deploy-6677d64bcd-598dj" [98da9631-ad0b-4406-b5c6-c709e679ab9d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0723 13:58:31.206878   19502 system_pods.go:126] duration metric: took 17.819593ms to wait for k8s-apps to be running ...
	I0723 13:58:31.206888   19502 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 13:58:31.206929   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 13:58:31.244583   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0723 13:58:31.244612   19502 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0723 13:58:31.306864   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27745784s)
	I0723 13:58:31.306907   19502 system_svc.go:56] duration metric: took 100.010856ms WaitForService to wait for kubelet
	I0723 13:58:31.306927   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.306943   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.306935   19502 kubeadm.go:582] duration metric: took 9.577629013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 13:58:31.306965   19502 node_conditions.go:102] verifying NodePressure condition ...
	I0723 13:58:31.307294   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.307313   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.307332   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.307344   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.307576   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:31.307617   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.307629   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.310204   19502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 13:58:31.310234   19502 node_conditions.go:123] node cpu capacity is 2
	I0723 13:58:31.310248   19502 node_conditions.go:105] duration metric: took 3.276395ms to run NodePressure ...
	I0723 13:58:31.310260   19502 start.go:241] waiting for startup goroutines ...
	I0723 13:58:31.329454   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 13:58:31.329472   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0723 13:58:31.367326   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:31.368802   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:31.378984   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 13:58:31.634056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:31.865237   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:31.866021   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:32.132939   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:32.388766   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:32.388872   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:32.494534   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.115508523s)
	I0723 13:58:32.494592   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:32.494609   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:32.494886   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:32.494905   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:32.494913   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:32.494923   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:32.494936   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:32.495131   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:32.495186   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:32.495202   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:32.496910   19502 addons.go:475] Verifying addon gcp-auth=true in "addons-566823"
	I0723 13:58:32.498867   19502 out.go:177] * Verifying gcp-auth addon...
	I0723 13:58:32.501027   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0723 13:58:32.520469   19502 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0723 13:58:32.520497   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:32.659478   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:32.865972   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:32.866095   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.006836   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:33.134100   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:33.370457   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.370928   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:33.506041   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:33.634493   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:33.868723   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.868930   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.005258   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:34.134103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:34.365342   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:34.366990   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.505036   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:34.634624   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:34.864780   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.865202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.005694   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:35.133706   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:35.365032   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.366599   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:35.506125   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:35.633733   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:35.864869   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.865243   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.005346   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:36.134524   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:36.589479   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:36.590482   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.590776   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:36.634932   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:36.865903   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.866132   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.008060   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:37.134155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:37.428556   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.429560   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:37.505224   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:37.633961   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:37.865381   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.866342   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:38.005067   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:38.133822   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:38.364857   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:38.366568   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:38.506228   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:38.633319   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.040865   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:39.041089   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.044975   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:39.133910   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.365010   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.365087   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:39.510776   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:39.634312   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.864769   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.865657   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:40.004562   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:40.135266   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:40.364798   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:40.364964   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:40.505729   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:40.633852   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:40.864930   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:40.865543   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.004956   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:41.133488   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:41.365168   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.365220   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:41.506309   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:41.634466   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:41.866993   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.867127   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.005414   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:42.134202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:42.365129   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.365135   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:42.506148   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:42.633941   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:42.864162   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.864303   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:43.005502   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:43.134807   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:43.365907   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:43.366162   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:43.505185   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:43.635747   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:43.863952   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:43.865544   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:44.005336   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:44.134225   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:44.365026   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:44.366707   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:44.504677   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:44.634555   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:44.864959   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:44.866361   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:45.012977   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:45.133916   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:45.365518   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:45.365795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:45.504404   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:45.634271   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:45.865279   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:45.865688   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.004860   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:46.133651   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:46.365572   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:46.365874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.506368   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:46.634669   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:46.866161   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.866661   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.005055   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:47.134519   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:47.369481   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.371941   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:47.505034   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:47.635357   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:47.864812   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.865218   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:48.011214   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:48.134098   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:48.364180   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:48.365207   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:48.505648   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:48.633420   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:48.865240   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:48.867566   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:49.004608   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:49.134532   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:49.365342   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:49.365444   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:49.505372   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:49.636738   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:49.901866   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:49.902039   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:50.005376   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:50.134244   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:50.365604   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:50.368275   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:50.505613   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:50.633852   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:50.864338   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:50.865595   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:51.004958   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:51.133930   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:51.365353   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:51.365758   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:51.505822   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:51.634550   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:51.863853   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:51.865464   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:52.004724   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:52.133989   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:52.364893   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:52.366262   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:52.505747   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:52.633350   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:52.864563   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:52.867114   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:53.005625   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:53.134265   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:53.365146   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:53.365940   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:53.505385   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:53.634547   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:53.865082   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:53.865669   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.005116   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:54.133562   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:54.366115   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:54.368343   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.504045   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:54.633927   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:54.867402   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.867627   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:55.005110   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:55.134107   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:55.368168   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:55.369936   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:55.504795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:55.633423   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:55.864906   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:55.865019   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:56.005251   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:56.134082   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:56.365073   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:56.365625   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:56.505758   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:56.634042   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:56.864135   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:56.864181   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:57.004505   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:57.135089   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:57.368997   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:57.369789   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:57.505130   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:57.633955   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:57.866329   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:57.866525   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.005056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:58.134054   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:58.365155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:58.368063   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.504959   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:58.634133   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:58.866299   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.866300   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.005040   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:59.133961   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:59.365122   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.365485   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:59.574645   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:59.634419   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:59.866537   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.866909   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.004281   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:00.136261   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:00.366991   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.367672   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:00.505823   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:00.642289   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:00.864816   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.867919   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:01.008626   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:01.134062   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:01.366213   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:01.366751   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:01.505397   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:01.636186   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:01.864487   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:01.866324   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.005666   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:02.134741   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:02.366733   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:02.368375   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.505103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:02.648636   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:02.867244   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.867428   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.009148   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:03.134123   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:03.364760   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.366193   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:03.505617   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:03.633999   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:03.864590   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.864910   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.004882   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:04.134097   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:04.364739   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:04.364806   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.505568   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:04.633263   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:04.866520   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.866877   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.005195   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:05.134266   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:05.365826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:05.366029   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.504567   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:05.633254   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:05.864748   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.868327   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:06.005315   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:06.134267   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:06.364402   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:06.364958   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:06.505403   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:06.721052   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.071523   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:07.071827   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.072491   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:07.133709   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.364693   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:07.364979   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.504899   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:07.635024   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.887523   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.888231   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:08.005670   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:08.134160   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:08.366564   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:08.366711   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:08.504858   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:08.633698   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:08.864373   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:08.864811   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.005319   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:09.134102   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:09.365052   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.365931   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:09.505160   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:09.635735   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:09.864409   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.864859   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.005379   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:10.137590   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:10.365481   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:10.366626   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.507309   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:10.634017   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:10.866521   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.872107   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:11.005333   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:11.134256   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:11.364951   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:11.365479   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:11.509190   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:11.634144   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:11.865091   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:11.865689   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:12.005089   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:12.133615   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:12.366042   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:12.366716   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:12.505189   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:12.634216   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:12.865076   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:12.866370   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:13.009333   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:13.134814   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:13.364678   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:13.365714   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:13.505874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:13.634194   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:13.866119   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:13.866956   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.004446   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:14.134056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:14.365092   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:14.366629   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.505155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:14.634412   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:14.864186   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.864648   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:15.006185   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:15.140078   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:15.365616   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:15.367152   19502 kapi.go:107] duration metric: took 45.507001664s to wait for kubernetes.io/minikube-addons=registry ...
	I0723 13:59:15.506531   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:15.634956   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:15.864278   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:16.005256   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:16.134826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:16.364629   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:16.504599   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:16.633522   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:16.865502   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:17.005372   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:17.136528   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:17.365232   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:17.505777   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:17.633476   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:17.865076   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:18.006520   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:18.134546   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:18.380631   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:18.621103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:18.633718   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:18.864170   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:19.004914   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:19.137097   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:19.364110   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:19.505129   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:19.634016   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:19.867361   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:20.007489   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:20.134449   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:20.365034   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:20.504397   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:20.634139   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:20.864022   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:21.004098   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:21.133656   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:21.364880   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:21.509704   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:21.633240   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:21.864114   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:22.004951   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:22.133486   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:22.364721   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:22.507347   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:22.634252   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:22.864760   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:23.004490   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:23.133471   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:23.365026   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:23.507826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:23.634170   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.185146   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.186664   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:24.186811   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:24.364979   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:24.504984   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:24.633987   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.864394   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:25.005547   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:25.134567   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:25.364647   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:25.506667   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:25.633469   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:25.865518   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:26.005595   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:26.135663   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:26.368712   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:26.506341   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:26.634820   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:26.863607   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:27.004109   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:27.134283   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:27.375890   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:27.512006   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:27.634754   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:27.865250   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:28.004892   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:28.133752   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:28.363768   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:28.504162   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:28.633753   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:28.864882   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:29.004658   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:29.134183   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:29.364911   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:29.511409   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:29.634544   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:29.864506   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:30.005225   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:30.133944   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:30.364834   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:30.511024   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:30.638337   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:30.869754   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:31.007236   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:31.135766   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:31.364185   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:31.505660   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:31.633679   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:31.993360   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:32.006154   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:32.134859   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:32.364344   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:32.504976   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:32.634076   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:32.864566   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:33.004674   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:33.133085   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:33.364094   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:33.505119   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:33.634542   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:33.865956   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:34.004619   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:34.134564   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:34.364734   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:34.504619   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:34.634779   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:34.864253   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:35.008601   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:35.135009   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:35.364952   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:35.504846   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:35.634049   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:35.864092   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:36.006022   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:36.134218   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:36.364762   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:36.505840   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:36.633880   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:36.865020   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:37.005035   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:37.133818   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:37.364293   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:37.504573   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:37.635633   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:37.864923   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:38.005492   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:38.134344   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:38.571143   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:38.571997   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:38.765989   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:38.869482   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:39.004665   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:39.134033   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:39.364542   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:39.508407   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:39.635034   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:39.863975   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:40.005536   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:40.133130   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:40.364230   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:40.515202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:40.638359   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.256677   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:41.257363   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.257353   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:41.364831   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:41.507674   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:41.633287   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.864568   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:42.004874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:42.133353   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:42.364498   19502 kapi.go:107] duration metric: took 1m12.504496508s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0723 13:59:42.505795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:42.636029   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:43.006053   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:43.137737   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:43.504915   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:43.634504   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:44.004073   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:44.134172   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:44.505043   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:44.634472   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:45.005215   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:45.134315   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:45.506210   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:45.634287   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:46.005325   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:46.134183   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:46.506967   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:46.634898   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:47.004497   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:47.151519   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:47.505439   19502 kapi.go:107] duration metric: took 1m15.004410647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0723 13:59:47.507246   19502 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-566823 cluster.
	I0723 13:59:47.508643   19502 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0723 13:59:47.510029   19502 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0723 13:59:47.633920   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:48.346054   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:48.633904   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:49.135686   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:49.634018   19502 kapi.go:107] duration metric: took 1m18.505696308s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0723 13:59:49.635779   19502 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0723 13:59:49.637140   19502 addons.go:510] duration metric: took 1m27.907849923s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner-rancher nvidia-device-plugin helm-tiller metrics-server storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0723 13:59:49.637180   19502 start.go:246] waiting for cluster config update ...
	I0723 13:59:49.637200   19502 start.go:255] writing updated cluster config ...
	I0723 13:59:49.637447   19502 ssh_runner.go:195] Run: rm -f paused
	I0723 13:59:49.688312   19502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 13:59:49.690300   19502 out.go:177] * Done! kubectl is now configured to use "addons-566823" cluster and "default" namespace by default
	
	
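	For reference, the `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth addon output above is applied as a pod label. Below is a minimal sketch of such a pod spec; only the label key comes from the log, while the label value "true", the pod name, and the image are illustrative assumptions, not taken from this test run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-example        # placeholder name, not from this test run
	  labels:
	    gcp-auth-skip-secret: "true"     # key from the addon output; value assumed
	spec:
	  containers:
	  - name: app
	    image: docker.io/kicbase/echo-server:1.0   # placeholder image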
	==> CRI-O <==
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.175739399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743373175708903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0809396-d363-43e9-aaec-f11f59d7c659 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.176422543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40bc12f3-923f-43a3-9982-95791f5aad36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.176490420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40bc12f3-923f-43a3-9982-95791f5aad36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.176833987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernetes.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f2e7cd4acf69859cbb9d4b96d13f525f9c11fa949d6dfe4df073711fc3c5f8,PodSandboxId:1fc04fea3dea4a041d0b34a9eef844ff109a6b9a07622e1d8e8a19c2fa031697,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721743164433140166,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmx6s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d13fc5ba-4f88-4bd7-a441-2e4a43e83c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdad86e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adabfbbeaadf03ff256b579687b337ff5fe670a96d3485d8552a29c95e2fda5e,PodSandboxId:9b914e9db15ff5915e378ef6f36c7158522acbb705eb33614590b5287eaffc1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721743164284877420,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p6zcr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf5b3688-d897-4236-b360-2974b847e300,},Annotations:map[string]string{io.kubernetes.container.hash: 9a1d892,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandboxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a9501832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,PodSandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c72182f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40bc12f3-923f-43a3-9982-95791f5aad36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.211126515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65beb3a8-e746-4d30-9762-f9f5a3199438 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.211211704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65beb3a8-e746-4d30-9762-f9f5a3199438 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.212158402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdf97392-14ec-4029-bdef-58a56e824320 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.213773012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743373213746995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdf97392-14ec-4029-bdef-58a56e824320 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.214387078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79646949-98cc-49f6-9201-aae278c5c859 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.214453088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79646949-98cc-49f6-9201-aae278c5c859 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:02:53 addons-566823 crio[678]: time="2024-07-23 14:02:53.214796615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernet
es.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f2e7cd4acf69859cbb9d4b96d13f525f9c11fa949d6dfe4df073711fc3c5f8,PodSandboxId:1fc04fea3dea4a041d0b34a9eef844ff109a6b9a07622e1d8e8a19c2fa031697,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721743164433140166,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmx6s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d13fc5ba-4f88-4bd7-a441-2e4a43e83c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdad86e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adabfbbeaadf03ff256b579687b337ff5fe670a96d3485d8552a29c95e2fda5e,PodSandboxId:9b914e9db15ff5915e378ef6f36c7158522acbb705eb33614590b5287eaffc1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721743164284877420,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p6zcr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf5b3688-d897-4236-b360-2974b847e300,},Annotations:map[string]string{io.kubernetes.container.hash: 9a1d892,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e
412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-
provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},
Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7
5690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandboxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99
aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a9501832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad3
2f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,Pod
SandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883cdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c721
82f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79646949-98cc-49f6-9201-aae278c5c859 name=/runtime.v1.RuntimeService/L
istContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b7c3a17efde7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   dbe642c6a83a7       hello-world-app-6778b5fc9f-d7gff
	5eb4666a476ba       docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                              2 minutes ago       Running             nginx                     0                   711231b4bab31       nginx
	83ee64f7ad5dc       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   a817ed4049510       headlamp-7867546754-f4tf7
	a6d043106634e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   810e5b26ca964       gcp-auth-5db96cd9b4-xvhbw
	57f2e7cd4acf6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   1fc04fea3dea4       ingress-nginx-admission-patch-xmx6s
	adabfbbeaadf0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   9b914e9db15ff       ingress-nginx-admission-create-p6zcr
	619d0d2ad819b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   d4f99f03b17e5       yakd-dashboard-799879c74f-k4b7n
	c618693edcba7       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   8903d6b2ee136       metrics-server-c59844bb4-f52cd
	16b95fc22c542       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b74f90d4c7902       storage-provisioner
	fe0ead5b9ae19       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   c0b77f115c3b4       coredns-7db6d8ff4d-4zjr6
	75690199f376c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   f9507dae3059d       kube-proxy-dhm7l
	cc85cfb34a42b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   f6ca437600187       kube-apiserver-addons-566823
	395cd38ab3a5c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   80f1b76c4adfb       kube-scheduler-addons-566823
	455c6f2b15566       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   58ae74b9e42d8       kube-controller-manager-addons-566823
	e9471edc6aed8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   db4bd96561f89       etcd-addons-566823
	
	
	==> coredns [fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117] <==
	[INFO] 10.244.0.7:41909 - 59287 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000591904s
	[INFO] 10.244.0.7:40802 - 50087 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086791s
	[INFO] 10.244.0.7:40802 - 46265 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068983s
	[INFO] 10.244.0.7:56840 - 13086 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060242s
	[INFO] 10.244.0.7:56840 - 49688 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119346s
	[INFO] 10.244.0.7:33942 - 29435 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093629s
	[INFO] 10.244.0.7:33942 - 11253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000047752s
	[INFO] 10.244.0.7:59691 - 44171 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099337s
	[INFO] 10.244.0.7:59691 - 2438 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003053s
	[INFO] 10.244.0.7:46306 - 11320 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030244s
	[INFO] 10.244.0.7:46306 - 13882 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072909s
	[INFO] 10.244.0.7:38834 - 28302 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027184s
	[INFO] 10.244.0.7:38834 - 29580 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054539s
	[INFO] 10.244.0.7:56251 - 16677 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033172s
	[INFO] 10.244.0.7:56251 - 53031 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000034123s
	[INFO] 10.244.0.22:58715 - 35388 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000398507s
	[INFO] 10.244.0.22:57233 - 35449 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072661s
	[INFO] 10.244.0.22:45036 - 6393 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082871s
	[INFO] 10.244.0.22:49055 - 29052 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000818041s
	[INFO] 10.244.0.22:46490 - 40988 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000387496s
	[INFO] 10.244.0.22:60739 - 46301 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000054974s
	[INFO] 10.244.0.22:40801 - 21898 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995342s
	[INFO] 10.244.0.22:56403 - 56910 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000502122s
	[INFO] 10.244.0.25:38501 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000397183s
	[INFO] 10.244.0.25:52897 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175644s
	
	
	==> describe nodes <==
	Name:               addons-566823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-566823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=addons-566823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T13_58_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-566823
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 13:58:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-566823
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:02:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:01:11 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:01:11 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:01:11 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:01:11 +0000   Tue, 23 Jul 2024 13:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-566823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 29339ff5f01d4e0484eccd5ff044a154
	  System UUID:                29339ff5-f01d-4e04-84ec-cd5ff044a154
	  Boot ID:                    3dc7844a-05b8-4110-a26d-f3272538bc6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-d7gff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5db96cd9b4-xvhbw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-7867546754-f4tf7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 coredns-7db6d8ff4d-4zjr6                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m32s
	  kube-system                 etcd-addons-566823                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m46s
	  kube-system                 kube-apiserver-addons-566823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-566823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-dhm7l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-566823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-c59844bb4-f52cd           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-799879c74f-k4b7n          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x2 over 4m46s)  kubelet          Node addons-566823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x2 over 4m46s)  kubelet          Node addons-566823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x2 over 4m46s)  kubelet          Node addons-566823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m45s                  kubelet          Node addons-566823 status is now: NodeReady
	  Normal  RegisteredNode           4m33s                  node-controller  Node addons-566823 event: Registered Node addons-566823 in Controller
	
	
	==> dmesg <==
	[  +5.109997] kauditd_printk_skb: 123 callbacks suppressed
	[  +5.246645] kauditd_printk_skb: 165 callbacks suppressed
	[  +6.825625] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.407855] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.056172] kauditd_printk_skb: 14 callbacks suppressed
	[Jul23 13:59] kauditd_printk_skb: 13 callbacks suppressed
	[ +11.955748] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.201226] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.450815] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.470618] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.130135] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.097158] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.884630] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.912545] kauditd_printk_skb: 15 callbacks suppressed
	[Jul23 14:00] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.381272] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.801597] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.038707] kauditd_printk_skb: 36 callbacks suppressed
	[ +21.305403] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.306249] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.421313] kauditd_printk_skb: 3 callbacks suppressed
	[Jul23 14:01] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.319621] kauditd_printk_skb: 33 callbacks suppressed
	[Jul23 14:02] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.001702] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce] <==
	{"level":"warn","ts":"2024-07-23T13:59:48.330536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:47.935652Z","time spent":"394.765159ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" mod_revision:1115 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" > >"}
	{"level":"warn","ts":"2024-07-23T13:59:48.330769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.410134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T13:59:48.330825Z","caller":"traceutil/trace.go:171","msg":"trace[1486851517] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1164; }","duration":"328.490434ms","start":"2024-07-23T13:59:48.002324Z","end":"2024-07-23T13:59:48.330814Z","steps":["trace[1486851517] 'agreement among raft nodes before linearized reading'  (duration: 328.361571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.33085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:48.002311Z","time spent":"328.533717ms","remote":"127.0.0.1:60818","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-23T13:59:48.331104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.372845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-23T13:59:48.331235Z","caller":"traceutil/trace.go:171","msg":"trace[479918349] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1164; }","duration":"300.486059ms","start":"2024-07-23T13:59:48.030668Z","end":"2024-07-23T13:59:48.331154Z","steps":["trace[479918349] 'agreement among raft nodes before linearized reading'  (duration: 300.121192ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.331285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:48.030654Z","time spent":"300.620481ms","remote":"127.0.0.1:50090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":10,"response size":29,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true "}
	{"level":"warn","ts":"2024-07-23T13:59:48.331383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.189743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-07-23T13:59:48.331425Z","caller":"traceutil/trace.go:171","msg":"trace[1054619235] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1164; }","duration":"212.251184ms","start":"2024-07-23T13:59:48.119166Z","end":"2024-07-23T13:59:48.331417Z","steps":["trace[1054619235] 'agreement among raft nodes before linearized reading'  (duration: 212.097316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.331246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.966283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-f52cd.17e4dc4466c8be25\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-07-23T13:59:48.331537Z","caller":"traceutil/trace.go:171","msg":"trace[723786253] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-f52cd.17e4dc4466c8be25; range_end:; response_count:1; response_revision:1164; }","duration":"117.285124ms","start":"2024-07-23T13:59:48.214243Z","end":"2024-07-23T13:59:48.331528Z","steps":["trace[723786253] 'agreement among raft nodes before linearized reading'  (duration: 116.923956ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T13:59:55.366357Z","caller":"traceutil/trace.go:171","msg":"trace[1043766184] linearizableReadLoop","detail":"{readStateIndex:1263; appliedIndex:1262; }","duration":"384.731123ms","start":"2024-07-23T13:59:54.981608Z","end":"2024-07-23T13:59:55.366339Z","steps":["trace[1043766184] 'read index received'  (duration: 384.546657ms)","trace[1043766184] 'applied index is now lower than readState.Index'  (duration: 183.791µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T13:59:55.366541Z","caller":"traceutil/trace.go:171","msg":"trace[1808763166] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"385.1786ms","start":"2024-07-23T13:59:54.981349Z","end":"2024-07-23T13:59:55.366528Z","steps":["trace[1808763166] 'process raft request'  (duration: 384.877219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:55.367124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:54.981333Z","time spent":"385.731876ms","remote":"127.0.0.1:50068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" mod_revision:1216 > success:<request_put:<key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" value_size:693 lease:3156619680085819913 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" > >"}
	{"level":"warn","ts":"2024-07-23T13:59:55.366642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.011965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-23T13:59:55.367775Z","caller":"traceutil/trace.go:171","msg":"trace[806612147] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1226; }","duration":"386.173308ms","start":"2024-07-23T13:59:54.98159Z","end":"2024-07-23T13:59:55.367763Z","steps":["trace[806612147] 'agreement among raft nodes before linearized reading'  (duration: 384.974735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:55.368324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:54.981582Z","time spent":"386.725562ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"info","ts":"2024-07-23T13:59:59.770911Z","caller":"traceutil/trace.go:171","msg":"trace[628963373] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"376.433359ms","start":"2024-07-23T13:59:59.394462Z","end":"2024-07-23T13:59:59.770895Z","steps":["trace[628963373] 'process raft request'  (duration: 376.213233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:59.77206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:59.394445Z","time spent":"377.38382ms","remote":"127.0.0.1:50168","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1248 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-23T14:00:50.78706Z","caller":"traceutil/trace.go:171","msg":"trace[411388456] linearizableReadLoop","detail":"{readStateIndex:1605; appliedIndex:1604; }","duration":"302.171606ms","start":"2024-07-23T14:00:50.484806Z","end":"2024-07-23T14:00:50.786977Z","steps":["trace[411388456] 'read index received'  (duration: 302.026289ms)","trace[411388456] 'applied index is now lower than readState.Index'  (duration: 144.832µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:00:50.787334Z","caller":"traceutil/trace.go:171","msg":"trace[1487591093] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"341.71765ms","start":"2024-07-23T14:00:50.445599Z","end":"2024-07-23T14:00:50.787317Z","steps":["trace[1487591093] 'process raft request'  (duration: 341.276228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:00:50.787464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:00:50.445584Z","time spent":"341.792142ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1537 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-23T14:00:50.787727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.916272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-23T14:00:50.787773Z","caller":"traceutil/trace.go:171","msg":"trace[1278567850] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1551; }","duration":"302.986535ms","start":"2024-07-23T14:00:50.484778Z","end":"2024-07-23T14:00:50.787765Z","steps":["trace[1278567850] 'agreement among raft nodes before linearized reading'  (duration: 302.881592ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:00:50.787808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:00:50.484765Z","time spent":"303.037249ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	
	
	==> gcp-auth [a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008] <==
	2024/07/23 13:59:46 GCP Auth Webhook started!
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:55 Ready to marshal response ...
	2024/07/23 13:59:55 Ready to write response ...
	2024/07/23 14:00:01 Ready to marshal response ...
	2024/07/23 14:00:01 Ready to write response ...
	2024/07/23 14:00:08 Ready to marshal response ...
	2024/07/23 14:00:08 Ready to write response ...
	2024/07/23 14:00:08 Ready to marshal response ...
	2024/07/23 14:00:08 Ready to write response ...
	2024/07/23 14:00:19 Ready to marshal response ...
	2024/07/23 14:00:19 Ready to write response ...
	2024/07/23 14:00:20 Ready to marshal response ...
	2024/07/23 14:00:20 Ready to write response ...
	2024/07/23 14:00:43 Ready to marshal response ...
	2024/07/23 14:00:43 Ready to write response ...
	2024/07/23 14:01:18 Ready to marshal response ...
	2024/07/23 14:01:18 Ready to write response ...
	2024/07/23 14:02:43 Ready to marshal response ...
	2024/07/23 14:02:43 Ready to write response ...
	
	
	==> kernel <==
	 14:02:53 up 5 min,  0 users,  load average: 0.61, 1.06, 0.55
	Linux addons-566823 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9] <==
	W0723 14:00:09.269355       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 14:00:09.269410       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 14:00:09.270532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 14:00:09.632890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0723 14:00:15.007887       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0723 14:00:16.045373       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0723 14:00:20.736659       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0723 14:00:20.954345       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.5.113"}
	E0723 14:00:35.931876       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0723 14:00:57.114683       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0723 14:01:34.514799       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.514855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.537327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.537568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.566982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.567191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.624482       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.624536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.677035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.677085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0723 14:01:35.567239       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0723 14:01:35.677639       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0723 14:01:35.686208       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0723 14:02:43.501152       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.78.209"}
	
	
	==> kube-controller-manager [455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096] <==
	I0723 14:01:51.442703       1 shared_informer.go:320] Caches are synced for garbage collector
	W0723 14:01:53.343410       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:01:53.343453       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:01:54.573263       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:01:54.573311       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:01:56.905708       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:01:56.905840       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:02:14.344981       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:02:14.345163       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:02:16.919448       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:02:16.919500       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:02:19.104529       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:02:19.104578       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:02:29.547434       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:02:29.547503       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0723 14:02:43.363636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.774288ms"
	I0723 14:02:43.394563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.876771ms"
	I0723 14:02:43.395167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="189.673µs"
	I0723 14:02:45.375420       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0723 14:02:45.383944       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0723 14:02:45.384242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.415µs"
	I0723 14:02:46.469220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="11.499983ms"
	I0723 14:02:46.469305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="33.479µs"
	W0723 14:02:48.405913       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:02:48.406093       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92] <==
	I0723 13:58:23.025071       1 server_linux.go:69] "Using iptables proxy"
	I0723 13:58:23.055845       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0723 13:58:23.144867       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 13:58:23.144912       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 13:58:23.144928       1 server_linux.go:165] "Using iptables Proxier"
	I0723 13:58:23.149761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 13:58:23.149945       1 server.go:872] "Version info" version="v1.30.3"
	I0723 13:58:23.149957       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 13:58:23.151928       1 config.go:192] "Starting service config controller"
	I0723 13:58:23.151943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 13:58:23.151979       1 config.go:101] "Starting endpoint slice config controller"
	I0723 13:58:23.151984       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 13:58:23.155386       1 config.go:319] "Starting node config controller"
	I0723 13:58:23.155394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 13:58:23.252675       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 13:58:23.252689       1 shared_informer.go:320] Caches are synced for service config
	I0723 13:58:23.256371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1] <==
	W0723 13:58:05.015845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.016667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.015881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 13:58:05.016679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 13:58:05.015913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 13:58:05.016690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 13:58:05.015946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 13:58:05.016701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 13:58:05.015983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.016714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.015565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 13:58:05.016726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0723 13:58:05.016923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 13:58:05.016975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 13:58:05.932434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.932488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.967100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.967142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:06.064347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 13:58:06.064392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 13:58:06.152152       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:06.152188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:06.456527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 13:58:06.457105       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 13:58:08.795481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:02:43 addons-566823 kubelet[1269]: I0723 14:02:43.360866    1269 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af26a5d-3cc4-4627-b99f-49f1153b5fac" containerName="csi-resizer"
	Jul 23 14:02:43 addons-566823 kubelet[1269]: I0723 14:02:43.360896    1269 memory_manager.go:354] "RemoveStaleState removing state" podUID="77143da7-595d-4dc8-92ab-1712b2322583" containerName="task-pv-container"
	Jul 23 14:02:43 addons-566823 kubelet[1269]: I0723 14:02:43.450617    1269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5h6\" (UniqueName: \"kubernetes.io/projected/15be02a5-428d-42b8-9e65-a3be389fac3e-kube-api-access-hq5h6\") pod \"hello-world-app-6778b5fc9f-d7gff\" (UID: \"15be02a5-428d-42b8-9e65-a3be389fac3e\") " pod="default/hello-world-app-6778b5fc9f-d7gff"
	Jul 23 14:02:43 addons-566823 kubelet[1269]: I0723 14:02:43.450811    1269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/15be02a5-428d-42b8-9e65-a3be389fac3e-gcp-creds\") pod \"hello-world-app-6778b5fc9f-d7gff\" (UID: \"15be02a5-428d-42b8-9e65-a3be389fac3e\") " pod="default/hello-world-app-6778b5fc9f-d7gff"
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.429322    1269 scope.go:117] "RemoveContainer" containerID="0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed"
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.446205    1269 scope.go:117] "RemoveContainer" containerID="0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed"
	Jul 23 14:02:44 addons-566823 kubelet[1269]: E0723 14:02:44.446655    1269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed\": container with ID starting with 0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed not found: ID does not exist" containerID="0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed"
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.446701    1269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed"} err="failed to get container status \"0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed\": rpc error: code = NotFound desc = could not find container \"0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed\": container with ID starting with 0eba547435ca7a706d540d432afbb8adac3fedb925c435ceba4c5033eb70caed not found: ID does not exist"
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.457114    1269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktpqj\" (UniqueName: \"kubernetes.io/projected/03cc5ad6-8256-43b3-b473-93939d6d75cd-kube-api-access-ktpqj\") pod \"03cc5ad6-8256-43b3-b473-93939d6d75cd\" (UID: \"03cc5ad6-8256-43b3-b473-93939d6d75cd\") "
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.459941    1269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cc5ad6-8256-43b3-b473-93939d6d75cd-kube-api-access-ktpqj" (OuterVolumeSpecName: "kube-api-access-ktpqj") pod "03cc5ad6-8256-43b3-b473-93939d6d75cd" (UID: "03cc5ad6-8256-43b3-b473-93939d6d75cd"). InnerVolumeSpecName "kube-api-access-ktpqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:02:44 addons-566823 kubelet[1269]: I0723 14:02:44.557768    1269 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ktpqj\" (UniqueName: \"kubernetes.io/projected/03cc5ad6-8256-43b3-b473-93939d6d75cd-kube-api-access-ktpqj\") on node \"addons-566823\" DevicePath \"\""
	Jul 23 14:02:45 addons-566823 kubelet[1269]: I0723 14:02:45.795630    1269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cc5ad6-8256-43b3-b473-93939d6d75cd" path="/var/lib/kubelet/pods/03cc5ad6-8256-43b3-b473-93939d6d75cd/volumes"
	Jul 23 14:02:45 addons-566823 kubelet[1269]: I0723 14:02:45.796083    1269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5b3688-d897-4236-b360-2974b847e300" path="/var/lib/kubelet/pods/bf5b3688-d897-4236-b360-2974b847e300/volumes"
	Jul 23 14:02:45 addons-566823 kubelet[1269]: I0723 14:02:45.796478    1269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13fc5ba-4f88-4bd7-a441-2e4a43e83c81" path="/var/lib/kubelet/pods/d13fc5ba-4f88-4bd7-a441-2e4a43e83c81/volumes"
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.688926    1269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d4ead61-436b-4327-9541-4c21bc59b3f9-webhook-cert\") pod \"2d4ead61-436b-4327-9541-4c21bc59b3f9\" (UID: \"2d4ead61-436b-4327-9541-4c21bc59b3f9\") "
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.689029    1269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9lm\" (UniqueName: \"kubernetes.io/projected/2d4ead61-436b-4327-9541-4c21bc59b3f9-kube-api-access-5v9lm\") pod \"2d4ead61-436b-4327-9541-4c21bc59b3f9\" (UID: \"2d4ead61-436b-4327-9541-4c21bc59b3f9\") "
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.692190    1269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4ead61-436b-4327-9541-4c21bc59b3f9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2d4ead61-436b-4327-9541-4c21bc59b3f9" (UID: "2d4ead61-436b-4327-9541-4c21bc59b3f9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.692641    1269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4ead61-436b-4327-9541-4c21bc59b3f9-kube-api-access-5v9lm" (OuterVolumeSpecName: "kube-api-access-5v9lm") pod "2d4ead61-436b-4327-9541-4c21bc59b3f9" (UID: "2d4ead61-436b-4327-9541-4c21bc59b3f9"). InnerVolumeSpecName "kube-api-access-5v9lm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.789648    1269 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d4ead61-436b-4327-9541-4c21bc59b3f9-webhook-cert\") on node \"addons-566823\" DevicePath \"\""
	Jul 23 14:02:48 addons-566823 kubelet[1269]: I0723 14:02:48.789691    1269 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5v9lm\" (UniqueName: \"kubernetes.io/projected/2d4ead61-436b-4327-9541-4c21bc59b3f9-kube-api-access-5v9lm\") on node \"addons-566823\" DevicePath \"\""
	Jul 23 14:02:49 addons-566823 kubelet[1269]: I0723 14:02:49.456261    1269 scope.go:117] "RemoveContainer" containerID="d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913"
	Jul 23 14:02:49 addons-566823 kubelet[1269]: I0723 14:02:49.481309    1269 scope.go:117] "RemoveContainer" containerID="d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913"
	Jul 23 14:02:49 addons-566823 kubelet[1269]: E0723 14:02:49.481872    1269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913\": container with ID starting with d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913 not found: ID does not exist" containerID="d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913"
	Jul 23 14:02:49 addons-566823 kubelet[1269]: I0723 14:02:49.481907    1269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913"} err="failed to get container status \"d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913\": rpc error: code = NotFound desc = could not find container \"d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913\": container with ID starting with d4390a3c1788d61c3af1b6795a8539b4282824a38b51ff264a4e7174f2f55913 not found: ID does not exist"
	Jul 23 14:02:49 addons-566823 kubelet[1269]: I0723 14:02:49.796847    1269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4ead61-436b-4327-9541-4c21bc59b3f9" path="/var/lib/kubelet/pods/2d4ead61-436b-4327-9541-4c21bc59b3f9/volumes"
	
	
	==> storage-provisioner [16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff] <==
	I0723 13:58:28.773519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 13:58:29.054175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 13:58:29.054275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 13:58:29.206327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 13:58:29.249446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad92d0f1-0afd-4aae-a180-a98760ca320f", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6 became leader
	I0723 13:58:29.249551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6!
	I0723 13:58:29.512096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-566823 -n addons-566823
helpers_test.go:261: (dbg) Run:  kubectl --context addons-566823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.87s)
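Side note on the post-mortem log above: the storage-provisioner output shows the usual Kubernetes leader-election handshake (attempting and then acquiring the kube-system/k8s.io-minikube-hostpath lease before starting the provisioner controller). As a rough, illustrative sketch of that pattern only — this is not the provisioner's actual code, and it uses a Lease lock and made-up durations where the log above shows an Endpoints-based lock — a minimal client-go version looks like this:

```go
// Minimal leader-election sketch with client-go (illustrative only; the
// minikube storage provisioner's real implementation differs, e.g. the
// log above shows an Endpoints lock rather than a Lease lock).
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // hypothetical identity; the real provisioner appends a UUID

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed values, not taken from the log
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the elected leader starts doing work, matching the
				// "became leader" / "Starting provisioner controller"
				// sequence in the log above.
			},
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}
```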

x
+
TestAddons/parallel/MetricsServer (362.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.851676ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-f52cd" [6b45f2b1-e48c-4097-aa53-5c2f5fea4806] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005712444s
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (70.257217ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-566823, age: 2m12.498699436s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (70.386542ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m0.770908676s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (74.81326ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m5.427632887s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (65.153167ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m9.690792642s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (66.360506ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m22.157187935s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (65.314986ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m31.038303768s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (68.755039ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 2m44.776235708s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (65.244401ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 3m25.454837254s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (61.208018ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 4m25.96316027s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (61.72428ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 5m39.995375421s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (65.368166ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 6m55.654930096s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-566823 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-566823 top pods -n kube-system: exit status 1 (62.378753ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-4zjr6, age: 7m53.244207425s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
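For reference, the repeated `kubectl top pods` attempts above amount to a simple poll-until-metrics-appear loop. A minimal, self-contained sketch of that kind of check is below; it is not the actual addons_test.go helper, and the kube context name, timeout, and interval are illustrative assumptions.

```go
// Illustrative sketch only: keep running `kubectl top pods -n kube-system`
// until it succeeds or the timeout expires, mirroring the retry pattern
// visible in the test output above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func pollTopPods(ctx context.Context, kubecontext string, interval time.Duration) error {
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubecontext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out) // metrics-server is serving pod metrics
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("metrics never became available: %v\nlast output: %s", err, out)
		case <-time.After(interval):
			// metrics-server has not reported this pod yet; try again
		}
	}
}

func main() {
	// Context name matches the profile in this report; timings are assumptions.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := pollTopPods(ctx, "addons-566823", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```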
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-566823 -n addons-566823
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 logs -n 25: (1.343249633s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-788360                                                                     | download-only-788360 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-344682                                                                     | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-055184                                                                     | download-only-055184 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-788360                                                                     | download-only-788360 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-132421 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | binary-mirror-132421                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32931                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-132421                                                                     | binary-mirror-132421 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-566823 --wait=true                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 13:59 UTC | 23 Jul 24 13:59 UTC |
	|         | -p addons-566823                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-566823 ip                                                                            | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | -p addons-566823                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | addons-566823                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-566823 ssh cat                                                                       | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:00 UTC |
	|         | /opt/local-path-provisioner/pvc-c8cbfc9c-f3f6-4373-91f9-dcf10e6a4265_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC | 23 Jul 24 14:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-566823 ssh curl -s                                                                   | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-566823 addons                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:01 UTC | 23 Jul 24 14:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-566823 addons                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:01 UTC | 23 Jul 24 14:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-566823 ip                                                                            | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-566823 addons disable                                                                | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:02 UTC | 23 Jul 24 14:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-566823 addons                                                                        | addons-566823        | jenkins | v1.33.1 | 23 Jul 24 14:06 UTC | 23 Jul 24 14:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 13:57:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 13:57:26.258787   19502 out.go:291] Setting OutFile to fd 1 ...
	I0723 13:57:26.259024   19502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:26.259032   19502 out.go:304] Setting ErrFile to fd 2...
	I0723 13:57:26.259036   19502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:26.259194   19502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 13:57:26.259737   19502 out.go:298] Setting JSON to false
	I0723 13:57:26.260524   19502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2392,"bootTime":1721740654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 13:57:26.260579   19502 start.go:139] virtualization: kvm guest
	I0723 13:57:26.262666   19502 out.go:177] * [addons-566823] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 13:57:26.263904   19502 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 13:57:26.263958   19502 notify.go:220] Checking for updates...
	I0723 13:57:26.266370   19502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 13:57:26.267711   19502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:57:26.268942   19502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:26.270070   19502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 13:57:26.271292   19502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 13:57:26.272503   19502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 13:57:26.303755   19502 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 13:57:26.304876   19502 start.go:297] selected driver: kvm2
	I0723 13:57:26.304897   19502 start.go:901] validating driver "kvm2" against <nil>
	I0723 13:57:26.304922   19502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 13:57:26.305633   19502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:26.305722   19502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 13:57:26.319951   19502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 13:57:26.319997   19502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 13:57:26.320229   19502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 13:57:26.320293   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:57:26.320320   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:57:26.320328   19502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 13:57:26.320406   19502 start.go:340] cluster config:
	{Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:57:26.320547   19502 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:26.322258   19502 out.go:177] * Starting "addons-566823" primary control-plane node in "addons-566823" cluster
	I0723 13:57:26.323420   19502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:57:26.323450   19502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 13:57:26.323459   19502 cache.go:56] Caching tarball of preloaded images
	I0723 13:57:26.323536   19502 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 13:57:26.323548   19502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 13:57:26.323866   19502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json ...
	I0723 13:57:26.323889   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json: {Name:mk9521b81ec09d3952c01470afbc69b6bbfc2443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:26.324033   19502 start.go:360] acquireMachinesLock for addons-566823: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 13:57:26.324091   19502 start.go:364] duration metric: took 41.807µs to acquireMachinesLock for "addons-566823"
	I0723 13:57:26.324111   19502 start.go:93] Provisioning new machine with config: &{Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 13:57:26.324189   19502 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 13:57:26.326081   19502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0723 13:57:26.326239   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:57:26.326284   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:57:26.340398   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0723 13:57:26.340784   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:57:26.341245   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:57:26.341262   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:57:26.341593   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:57:26.341747   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:26.341865   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:26.341980   19502 start.go:159] libmachine.API.Create for "addons-566823" (driver="kvm2")
	I0723 13:57:26.342009   19502 client.go:168] LocalClient.Create starting
	I0723 13:57:26.342050   19502 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 13:57:26.627266   19502 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 13:57:26.784601   19502 main.go:141] libmachine: Running pre-create checks...
	I0723 13:57:26.784625   19502 main.go:141] libmachine: (addons-566823) Calling .PreCreateCheck
	I0723 13:57:26.785101   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:26.785541   19502 main.go:141] libmachine: Creating machine...
	I0723 13:57:26.785556   19502 main.go:141] libmachine: (addons-566823) Calling .Create
	I0723 13:57:26.785716   19502 main.go:141] libmachine: (addons-566823) Creating KVM machine...
	I0723 13:57:26.787096   19502 main.go:141] libmachine: (addons-566823) DBG | found existing default KVM network
	I0723 13:57:26.787808   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:26.787639   19524 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0723 13:57:26.787841   19502 main.go:141] libmachine: (addons-566823) DBG | created network xml: 
	I0723 13:57:26.787857   19502 main.go:141] libmachine: (addons-566823) DBG | <network>
	I0723 13:57:26.787869   19502 main.go:141] libmachine: (addons-566823) DBG |   <name>mk-addons-566823</name>
	I0723 13:57:26.787880   19502 main.go:141] libmachine: (addons-566823) DBG |   <dns enable='no'/>
	I0723 13:57:26.787891   19502 main.go:141] libmachine: (addons-566823) DBG |   
	I0723 13:57:26.787904   19502 main.go:141] libmachine: (addons-566823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0723 13:57:26.787921   19502 main.go:141] libmachine: (addons-566823) DBG |     <dhcp>
	I0723 13:57:26.787932   19502 main.go:141] libmachine: (addons-566823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0723 13:57:26.787940   19502 main.go:141] libmachine: (addons-566823) DBG |     </dhcp>
	I0723 13:57:26.787949   19502 main.go:141] libmachine: (addons-566823) DBG |   </ip>
	I0723 13:57:26.787955   19502 main.go:141] libmachine: (addons-566823) DBG |   
	I0723 13:57:26.787963   19502 main.go:141] libmachine: (addons-566823) DBG | </network>
	I0723 13:57:26.787969   19502 main.go:141] libmachine: (addons-566823) DBG | 
	I0723 13:57:26.792991   19502 main.go:141] libmachine: (addons-566823) DBG | trying to create private KVM network mk-addons-566823 192.168.39.0/24...
	I0723 13:57:26.858781   19502 main.go:141] libmachine: (addons-566823) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 ...
	I0723 13:57:26.858824   19502 main.go:141] libmachine: (addons-566823) DBG | private KVM network mk-addons-566823 192.168.39.0/24 created
	I0723 13:57:26.858840   19502 main.go:141] libmachine: (addons-566823) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 13:57:26.858857   19502 main.go:141] libmachine: (addons-566823) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 13:57:26.858868   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:26.858719   19524 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:27.110056   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.109886   19524 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa...
	I0723 13:57:27.245741   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.245626   19524 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/addons-566823.rawdisk...
	I0723 13:57:27.245763   19502 main.go:141] libmachine: (addons-566823) DBG | Writing magic tar header
	I0723 13:57:27.245775   19502 main.go:141] libmachine: (addons-566823) DBG | Writing SSH key tar header
	I0723 13:57:27.245887   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:27.245806   19524 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 ...
	I0723 13:57:27.245922   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823
	I0723 13:57:27.245976   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823 (perms=drwx------)
	I0723 13:57:27.245996   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 13:57:27.246008   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 13:57:27.246027   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:27.246040   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 13:57:27.246049   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 13:57:27.246061   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 13:57:27.246071   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home/jenkins
	I0723 13:57:27.246082   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 13:57:27.246094   19502 main.go:141] libmachine: (addons-566823) DBG | Checking permissions on dir: /home
	I0723 13:57:27.246108   19502 main.go:141] libmachine: (addons-566823) DBG | Skipping /home - not owner
	I0723 13:57:27.246121   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 13:57:27.246132   19502 main.go:141] libmachine: (addons-566823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 13:57:27.246149   19502 main.go:141] libmachine: (addons-566823) Creating domain...
	I0723 13:57:27.247150   19502 main.go:141] libmachine: (addons-566823) define libvirt domain using xml: 
	I0723 13:57:27.247173   19502 main.go:141] libmachine: (addons-566823) <domain type='kvm'>
	I0723 13:57:27.247183   19502 main.go:141] libmachine: (addons-566823)   <name>addons-566823</name>
	I0723 13:57:27.247190   19502 main.go:141] libmachine: (addons-566823)   <memory unit='MiB'>4000</memory>
	I0723 13:57:27.247199   19502 main.go:141] libmachine: (addons-566823)   <vcpu>2</vcpu>
	I0723 13:57:27.247211   19502 main.go:141] libmachine: (addons-566823)   <features>
	I0723 13:57:27.247223   19502 main.go:141] libmachine: (addons-566823)     <acpi/>
	I0723 13:57:27.247229   19502 main.go:141] libmachine: (addons-566823)     <apic/>
	I0723 13:57:27.247234   19502 main.go:141] libmachine: (addons-566823)     <pae/>
	I0723 13:57:27.247239   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247245   19502 main.go:141] libmachine: (addons-566823)   </features>
	I0723 13:57:27.247254   19502 main.go:141] libmachine: (addons-566823)   <cpu mode='host-passthrough'>
	I0723 13:57:27.247258   19502 main.go:141] libmachine: (addons-566823)   
	I0723 13:57:27.247264   19502 main.go:141] libmachine: (addons-566823)   </cpu>
	I0723 13:57:27.247269   19502 main.go:141] libmachine: (addons-566823)   <os>
	I0723 13:57:27.247277   19502 main.go:141] libmachine: (addons-566823)     <type>hvm</type>
	I0723 13:57:27.247286   19502 main.go:141] libmachine: (addons-566823)     <boot dev='cdrom'/>
	I0723 13:57:27.247296   19502 main.go:141] libmachine: (addons-566823)     <boot dev='hd'/>
	I0723 13:57:27.247318   19502 main.go:141] libmachine: (addons-566823)     <bootmenu enable='no'/>
	I0723 13:57:27.247323   19502 main.go:141] libmachine: (addons-566823)   </os>
	I0723 13:57:27.247331   19502 main.go:141] libmachine: (addons-566823)   <devices>
	I0723 13:57:27.247346   19502 main.go:141] libmachine: (addons-566823)     <disk type='file' device='cdrom'>
	I0723 13:57:27.247361   19502 main.go:141] libmachine: (addons-566823)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/boot2docker.iso'/>
	I0723 13:57:27.247372   19502 main.go:141] libmachine: (addons-566823)       <target dev='hdc' bus='scsi'/>
	I0723 13:57:27.247380   19502 main.go:141] libmachine: (addons-566823)       <readonly/>
	I0723 13:57:27.247387   19502 main.go:141] libmachine: (addons-566823)     </disk>
	I0723 13:57:27.247394   19502 main.go:141] libmachine: (addons-566823)     <disk type='file' device='disk'>
	I0723 13:57:27.247402   19502 main.go:141] libmachine: (addons-566823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 13:57:27.247428   19502 main.go:141] libmachine: (addons-566823)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/addons-566823.rawdisk'/>
	I0723 13:57:27.247450   19502 main.go:141] libmachine: (addons-566823)       <target dev='hda' bus='virtio'/>
	I0723 13:57:27.247460   19502 main.go:141] libmachine: (addons-566823)     </disk>
	I0723 13:57:27.247468   19502 main.go:141] libmachine: (addons-566823)     <interface type='network'>
	I0723 13:57:27.247475   19502 main.go:141] libmachine: (addons-566823)       <source network='mk-addons-566823'/>
	I0723 13:57:27.247482   19502 main.go:141] libmachine: (addons-566823)       <model type='virtio'/>
	I0723 13:57:27.247487   19502 main.go:141] libmachine: (addons-566823)     </interface>
	I0723 13:57:27.247494   19502 main.go:141] libmachine: (addons-566823)     <interface type='network'>
	I0723 13:57:27.247500   19502 main.go:141] libmachine: (addons-566823)       <source network='default'/>
	I0723 13:57:27.247507   19502 main.go:141] libmachine: (addons-566823)       <model type='virtio'/>
	I0723 13:57:27.247512   19502 main.go:141] libmachine: (addons-566823)     </interface>
	I0723 13:57:27.247518   19502 main.go:141] libmachine: (addons-566823)     <serial type='pty'>
	I0723 13:57:27.247525   19502 main.go:141] libmachine: (addons-566823)       <target port='0'/>
	I0723 13:57:27.247540   19502 main.go:141] libmachine: (addons-566823)     </serial>
	I0723 13:57:27.247552   19502 main.go:141] libmachine: (addons-566823)     <console type='pty'>
	I0723 13:57:27.247560   19502 main.go:141] libmachine: (addons-566823)       <target type='serial' port='0'/>
	I0723 13:57:27.247565   19502 main.go:141] libmachine: (addons-566823)     </console>
	I0723 13:57:27.247572   19502 main.go:141] libmachine: (addons-566823)     <rng model='virtio'>
	I0723 13:57:27.247579   19502 main.go:141] libmachine: (addons-566823)       <backend model='random'>/dev/random</backend>
	I0723 13:57:27.247585   19502 main.go:141] libmachine: (addons-566823)     </rng>
	I0723 13:57:27.247591   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247597   19502 main.go:141] libmachine: (addons-566823)     
	I0723 13:57:27.247602   19502 main.go:141] libmachine: (addons-566823)   </devices>
	I0723 13:57:27.247609   19502 main.go:141] libmachine: (addons-566823) </domain>
	I0723 13:57:27.247619   19502 main.go:141] libmachine: (addons-566823) 
	I0723 13:57:27.253594   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:7f:41:11 in network default
	I0723 13:57:27.254205   19502 main.go:141] libmachine: (addons-566823) Ensuring networks are active...
	I0723 13:57:27.254223   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:27.255123   19502 main.go:141] libmachine: (addons-566823) Ensuring network default is active
	I0723 13:57:27.255511   19502 main.go:141] libmachine: (addons-566823) Ensuring network mk-addons-566823 is active
	I0723 13:57:27.255998   19502 main.go:141] libmachine: (addons-566823) Getting domain xml...
	I0723 13:57:27.256856   19502 main.go:141] libmachine: (addons-566823) Creating domain...
	I0723 13:57:28.697829   19502 main.go:141] libmachine: (addons-566823) Waiting to get IP...
	I0723 13:57:28.698600   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:28.699020   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:28.699042   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:28.699002   19524 retry.go:31] will retry after 307.94193ms: waiting for machine to come up
	I0723 13:57:29.008603   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.008986   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.009013   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.008949   19524 retry.go:31] will retry after 384.73915ms: waiting for machine to come up
	I0723 13:57:29.396898   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.397404   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.397435   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.397335   19524 retry.go:31] will retry after 426.861857ms: waiting for machine to come up
	I0723 13:57:29.825896   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:29.826286   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:29.826327   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:29.826251   19524 retry.go:31] will retry after 439.359176ms: waiting for machine to come up
	I0723 13:57:30.266982   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:30.267497   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:30.267527   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:30.267432   19524 retry.go:31] will retry after 536.9439ms: waiting for machine to come up
	I0723 13:57:30.806186   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:30.806607   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:30.806635   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:30.806566   19524 retry.go:31] will retry after 615.974579ms: waiting for machine to come up
	I0723 13:57:31.423980   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:31.424516   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:31.424544   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:31.424481   19524 retry.go:31] will retry after 786.794896ms: waiting for machine to come up
	I0723 13:57:32.212282   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:32.212640   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:32.212668   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:32.212600   19524 retry.go:31] will retry after 1.0057163s: waiting for machine to come up
	I0723 13:57:33.219712   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:33.220118   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:33.220143   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:33.220076   19524 retry.go:31] will retry after 1.30408869s: waiting for machine to come up
	I0723 13:57:34.526732   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:34.527161   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:34.527182   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:34.527126   19524 retry.go:31] will retry after 2.04064909s: waiting for machine to come up
	I0723 13:57:36.569195   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:36.569672   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:36.569699   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:36.569626   19524 retry.go:31] will retry after 1.957363737s: waiting for machine to come up
	I0723 13:57:38.529699   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:38.530174   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:38.530198   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:38.530084   19524 retry.go:31] will retry after 2.759683998s: waiting for machine to come up
	I0723 13:57:41.293038   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:41.293546   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:41.293569   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:41.293474   19524 retry.go:31] will retry after 3.612061693s: waiting for machine to come up
	I0723 13:57:44.909592   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:44.910080   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find current IP address of domain addons-566823 in network mk-addons-566823
	I0723 13:57:44.910103   19502 main.go:141] libmachine: (addons-566823) DBG | I0723 13:57:44.910036   19524 retry.go:31] will retry after 5.185969246s: waiting for machine to come up
	I0723 13:57:50.100167   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.100556   19502 main.go:141] libmachine: (addons-566823) Found IP for machine: 192.168.39.114
	I0723 13:57:50.100580   19502 main.go:141] libmachine: (addons-566823) Reserving static IP address...
	I0723 13:57:50.100593   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has current primary IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.100944   19502 main.go:141] libmachine: (addons-566823) DBG | unable to find host DHCP lease matching {name: "addons-566823", mac: "52:54:00:41:2b:ac", ip: "192.168.39.114"} in network mk-addons-566823
	I0723 13:57:50.171662   19502 main.go:141] libmachine: (addons-566823) DBG | Getting to WaitForSSH function...
	I0723 13:57:50.171687   19502 main.go:141] libmachine: (addons-566823) Reserved static IP address: 192.168.39.114
	I0723 13:57:50.171700   19502 main.go:141] libmachine: (addons-566823) Waiting for SSH to be available...
	I0723 13:57:50.174271   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.174718   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.174754   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.174975   19502 main.go:141] libmachine: (addons-566823) DBG | Using SSH client type: external
	I0723 13:57:50.175018   19502 main.go:141] libmachine: (addons-566823) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa (-rw-------)
	I0723 13:57:50.175047   19502 main.go:141] libmachine: (addons-566823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 13:57:50.175064   19502 main.go:141] libmachine: (addons-566823) DBG | About to run SSH command:
	I0723 13:57:50.175101   19502 main.go:141] libmachine: (addons-566823) DBG | exit 0
	I0723 13:57:50.302235   19502 main.go:141] libmachine: (addons-566823) DBG | SSH cmd err, output: <nil>: 
	I0723 13:57:50.302531   19502 main.go:141] libmachine: (addons-566823) KVM machine creation complete!
	I0723 13:57:50.302848   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:50.303333   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:50.303574   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:50.303763   19502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 13:57:50.303779   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:57:50.305020   19502 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 13:57:50.305035   19502 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 13:57:50.305042   19502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 13:57:50.305047   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.307430   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.307793   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.307820   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.307919   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.308122   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.308268   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.308429   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.308670   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.308880   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.308894   19502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 13:57:50.405582   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 13:57:50.405605   19502 main.go:141] libmachine: Detecting the provisioner...
	I0723 13:57:50.405614   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.408642   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.408967   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.408991   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.409164   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.409346   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.409545   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.409678   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.409834   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.410027   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.410039   19502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 13:57:50.506663   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 13:57:50.506763   19502 main.go:141] libmachine: found compatible host: buildroot
	I0723 13:57:50.506778   19502 main.go:141] libmachine: Provisioning with buildroot...
	I0723 13:57:50.506789   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.507035   19502 buildroot.go:166] provisioning hostname "addons-566823"
	I0723 13:57:50.507059   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.507262   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.510208   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.510607   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.510633   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.510801   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.510976   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.511110   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.511237   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.511415   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.511582   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.511595   19502 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-566823 && echo "addons-566823" | sudo tee /etc/hostname
	I0723 13:57:50.624287   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-566823
	
	I0723 13:57:50.624316   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.626776   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.627128   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.627156   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.627361   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.627544   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.627770   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.627943   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.628110   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:50.628279   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:50.628302   19502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-566823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-566823/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-566823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 13:57:50.734982   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 13:57:50.735008   19502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 13:57:50.735031   19502 buildroot.go:174] setting up certificates
	I0723 13:57:50.735044   19502 provision.go:84] configureAuth start
	I0723 13:57:50.735056   19502 main.go:141] libmachine: (addons-566823) Calling .GetMachineName
	I0723 13:57:50.735334   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:50.738308   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.738817   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.738841   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.739019   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.741385   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.741700   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.741718   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.741868   19502 provision.go:143] copyHostCerts
	I0723 13:57:50.741937   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 13:57:50.742064   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 13:57:50.742145   19502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 13:57:50.742207   19502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.addons-566823 san=[127.0.0.1 192.168.39.114 addons-566823 localhost minikube]
	I0723 13:57:50.871458   19502 provision.go:177] copyRemoteCerts
	I0723 13:57:50.871532   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 13:57:50.871560   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:50.874470   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.874754   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:50.874783   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:50.874931   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:50.875098   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:50.875240   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:50.875343   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:50.952409   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 13:57:50.974842   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 13:57:50.996745   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 13:57:51.021093   19502 provision.go:87] duration metric: took 286.036544ms to configureAuth
	I0723 13:57:51.021119   19502 buildroot.go:189] setting minikube options for container-runtime
	I0723 13:57:51.021285   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:57:51.021371   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.023995   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.024327   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.024353   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.024542   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.024810   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.024999   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.025156   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.025404   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:51.025563   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:51.025580   19502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 13:57:51.273761   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 13:57:51.273788   19502 main.go:141] libmachine: Checking connection to Docker...
	I0723 13:57:51.273800   19502 main.go:141] libmachine: (addons-566823) Calling .GetURL
	I0723 13:57:51.275209   19502 main.go:141] libmachine: (addons-566823) DBG | Using libvirt version 6000000
	I0723 13:57:51.277390   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.277733   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.277750   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.277986   19502 main.go:141] libmachine: Docker is up and running!
	I0723 13:57:51.278007   19502 main.go:141] libmachine: Reticulating splines...
	I0723 13:57:51.278014   19502 client.go:171] duration metric: took 24.935997246s to LocalClient.Create
	I0723 13:57:51.278041   19502 start.go:167] duration metric: took 24.936063055s to libmachine.API.Create "addons-566823"
	I0723 13:57:51.278051   19502 start.go:293] postStartSetup for "addons-566823" (driver="kvm2")
	I0723 13:57:51.278061   19502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 13:57:51.278077   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.278461   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 13:57:51.278484   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.280896   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.281145   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.281177   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.281317   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.281507   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.281653   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.281782   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.360282   19502 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 13:57:51.364398   19502 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 13:57:51.364421   19502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 13:57:51.364501   19502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 13:57:51.364548   19502 start.go:296] duration metric: took 86.489306ms for postStartSetup
	I0723 13:57:51.364586   19502 main.go:141] libmachine: (addons-566823) Calling .GetConfigRaw
	I0723 13:57:51.365074   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:51.367613   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.367951   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.367980   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.368199   19502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/config.json ...
	I0723 13:57:51.368388   19502 start.go:128] duration metric: took 25.044188254s to createHost
	I0723 13:57:51.368412   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.370626   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.370878   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.370904   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.371084   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.371250   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.371417   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.371531   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.371681   19502 main.go:141] libmachine: Using SSH client type: native
	I0723 13:57:51.371831   19502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0723 13:57:51.371845   19502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 13:57:51.470736   19502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721743071.449258583
	
	I0723 13:57:51.470761   19502 fix.go:216] guest clock: 1721743071.449258583
	I0723 13:57:51.470769   19502 fix.go:229] Guest: 2024-07-23 13:57:51.449258583 +0000 UTC Remote: 2024-07-23 13:57:51.368400792 +0000 UTC m=+25.142952707 (delta=80.857791ms)
	I0723 13:57:51.470787   19502 fix.go:200] guest clock delta is within tolerance: 80.857791ms
	I0723 13:57:51.470793   19502 start.go:83] releasing machines lock for "addons-566823", held for 25.146690322s
	I0723 13:57:51.470818   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.471104   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:51.473941   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.474452   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.474470   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.474680   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475226   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475420   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:57:51.475514   19502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 13:57:51.475564   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.475677   19502 ssh_runner.go:195] Run: cat /version.json
	I0723 13:57:51.475704   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:57:51.478452   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478557   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478819   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.478850   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.478948   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.478950   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:51.478984   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:51.479100   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.479163   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:57:51.479243   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.479332   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:57:51.479390   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.479462   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:57:51.479607   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:57:51.550829   19502 ssh_runner.go:195] Run: systemctl --version
	I0723 13:57:51.584947   19502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 13:57:51.738932   19502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 13:57:51.744575   19502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 13:57:51.744639   19502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 13:57:51.759140   19502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 13:57:51.759164   19502 start.go:495] detecting cgroup driver to use...
	I0723 13:57:51.759218   19502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 13:57:51.779838   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 13:57:51.793147   19502 docker.go:217] disabling cri-docker service (if available) ...
	I0723 13:57:51.793195   19502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 13:57:51.805781   19502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 13:57:51.818438   19502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 13:57:51.923193   19502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 13:57:52.046606   19502 docker.go:233] disabling docker service ...
	I0723 13:57:52.046668   19502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 13:57:52.060915   19502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 13:57:52.073705   19502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 13:57:52.215736   19502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 13:57:52.326953   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 13:57:52.341293   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 13:57:52.358731   19502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 13:57:52.358801   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.368726   19502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 13:57:52.368821   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.378911   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.388508   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.398355   19502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 13:57:52.407985   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.417845   19502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.433589   19502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 13:57:52.443392   19502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 13:57:52.452658   19502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 13:57:52.452737   19502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 13:57:52.466357   19502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 13:57:52.475851   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:57:52.591390   19502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 13:57:52.722695   19502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 13:57:52.722782   19502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 13:57:52.726976   19502 start.go:563] Will wait 60s for crictl version
	I0723 13:57:52.727039   19502 ssh_runner.go:195] Run: which crictl
	I0723 13:57:52.730321   19502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 13:57:52.766023   19502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 13:57:52.766144   19502 ssh_runner.go:195] Run: crio --version
	I0723 13:57:52.791208   19502 ssh_runner.go:195] Run: crio --version
	I0723 13:57:52.817964   19502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 13:57:52.819330   19502 main.go:141] libmachine: (addons-566823) Calling .GetIP
	I0723 13:57:52.821772   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:52.822119   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:57:52.822145   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:57:52.822373   19502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 13:57:52.826252   19502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 13:57:52.837740   19502 kubeadm.go:883] updating cluster {Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 13:57:52.837835   19502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:57:52.837876   19502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 13:57:52.868970   19502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 13:57:52.869040   19502 ssh_runner.go:195] Run: which lz4
	I0723 13:57:52.872752   19502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 13:57:52.876744   19502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 13:57:52.876774   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 13:57:54.052206   19502 crio.go:462] duration metric: took 1.179478604s to copy over tarball
	I0723 13:57:54.052283   19502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 13:57:56.274956   19502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.222640378s)
	I0723 13:57:56.274986   19502 crio.go:469] duration metric: took 2.222757664s to extract the tarball
	I0723 13:57:56.274994   19502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 13:57:56.318004   19502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 13:57:56.356951   19502 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 13:57:56.356975   19502 cache_images.go:84] Images are preloaded, skipping loading
	I0723 13:57:56.356983   19502 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.30.3 crio true true} ...
	I0723 13:57:56.357081   19502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-566823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 13:57:56.357146   19502 ssh_runner.go:195] Run: crio config
	I0723 13:57:56.412554   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:57:56.412578   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:57:56.412587   19502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 13:57:56.412607   19502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-566823 NodeName:addons-566823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 13:57:56.412748   19502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-566823"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 13:57:56.412821   19502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 13:57:56.422155   19502 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 13:57:56.422220   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 13:57:56.431010   19502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 13:57:56.446690   19502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 13:57:56.462055   19502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0723 13:57:56.477917   19502 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0723 13:57:56.481648   19502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 13:57:56.492533   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:57:56.601403   19502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 13:57:56.616573   19502 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823 for IP: 192.168.39.114
	I0723 13:57:56.616599   19502 certs.go:194] generating shared ca certs ...
	I0723 13:57:56.616618   19502 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.616787   19502 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 13:57:56.785134   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt ...
	I0723 13:57:56.785160   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt: {Name:mk36e09d7ac6dd29f323e105c718380c8b560655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.785312   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key ...
	I0723 13:57:56.785323   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key: {Name:mk5bb118f835953a95454c83f6da991c61082a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.785388   19502 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 13:57:56.977261   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt ...
	I0723 13:57:56.977289   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt: {Name:mkbb8d91dd4e6e1519ac2b5cb44d6ea526cac429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.977443   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key ...
	I0723 13:57:56.977453   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key: {Name:mk2b563123a7ab0f3949cbb2747ecfbeb56e3787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:56.977519   19502 certs.go:256] generating profile certs ...
	I0723 13:57:56.977567   19502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key
	I0723 13:57:56.977579   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt with IP's: []
	I0723 13:57:57.158641   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt ...
	I0723 13:57:57.158676   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: {Name:mkb0b599bc3001e92419b5765ab8147765f8a443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.158854   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key ...
	I0723 13:57:57.158866   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.key: {Name:mk564281decac921298dfd4cb0f95eec8dcd82fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.158941   19502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37
	I0723 13:57:57.158962   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0723 13:57:57.273104   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 ...
	I0723 13:57:57.273141   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37: {Name:mk688de0645539df463633501160ac13657adeb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.273314   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37 ...
	I0723 13:57:57.273328   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37: {Name:mk546ea975be19c9ea55e5a690a20c03fc692153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.273405   19502 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt.80625e37 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt
	I0723 13:57:57.273481   19502 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key.80625e37 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key
	I0723 13:57:57.273533   19502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key
	I0723 13:57:57.273552   19502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt with IP's: []
	I0723 13:57:57.621318   19502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt ...
	I0723 13:57:57.621350   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt: {Name:mkedd6d6ace9f091aa971fec0c1f4d45184621c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.621515   19502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key ...
	I0723 13:57:57.621526   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key: {Name:mk551dbe38aa839bf357f2e08713ad68f188b641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:57:57.621697   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 13:57:57.621732   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 13:57:57.621758   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 13:57:57.621784   19502 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 13:57:57.622333   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 13:57:57.646802   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 13:57:57.677287   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 13:57:57.700767   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 13:57:57.723764   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 13:57:57.745931   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 13:57:57.768798   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 13:57:57.791556   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 13:57:57.814057   19502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 13:57:57.836303   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 13:57:57.851639   19502 ssh_runner.go:195] Run: openssl version
	I0723 13:57:57.857262   19502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 13:57:57.867796   19502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.871949   19502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.872006   19502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 13:57:57.877811   19502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 13:57:57.888462   19502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 13:57:57.892983   19502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 13:57:57.893036   19502 kubeadm.go:392] StartCluster: {Name:addons-566823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-566823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:57:57.893117   19502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 13:57:57.893176   19502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 13:57:57.932055   19502 cri.go:89] found id: ""
	I0723 13:57:57.932120   19502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 13:57:57.944148   19502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 13:57:57.969796   19502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 13:57:57.982882   19502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 13:57:57.982905   19502 kubeadm.go:157] found existing configuration files:
	
	I0723 13:57:57.982948   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 13:57:57.998899   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 13:57:57.998962   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 13:57:58.008834   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 13:57:58.017800   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 13:57:58.017860   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 13:57:58.027707   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 13:57:58.037279   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 13:57:58.037333   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 13:57:58.047150   19502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 13:57:58.056500   19502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 13:57:58.056565   19502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 13:57:58.066135   19502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 13:57:58.260519   19502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 13:58:08.472829   19502 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 13:58:08.472922   19502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 13:58:08.472991   19502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 13:58:08.473126   19502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 13:58:08.473237   19502 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 13:58:08.473332   19502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 13:58:08.475130   19502 out.go:204]   - Generating certificates and keys ...
	I0723 13:58:08.475209   19502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 13:58:08.475286   19502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 13:58:08.475349   19502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 13:58:08.475397   19502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 13:58:08.475448   19502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 13:58:08.475491   19502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 13:58:08.475544   19502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 13:58:08.475719   19502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-566823 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0723 13:58:08.475781   19502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 13:58:08.475881   19502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-566823 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0723 13:58:08.475935   19502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 13:58:08.475992   19502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 13:58:08.476030   19502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 13:58:08.476127   19502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 13:58:08.476189   19502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 13:58:08.476236   19502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 13:58:08.476285   19502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 13:58:08.476348   19502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 13:58:08.476396   19502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 13:58:08.476468   19502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 13:58:08.476527   19502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 13:58:08.478016   19502 out.go:204]   - Booting up control plane ...
	I0723 13:58:08.478106   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 13:58:08.478171   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 13:58:08.478227   19502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 13:58:08.478324   19502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 13:58:08.478430   19502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 13:58:08.478482   19502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 13:58:08.478635   19502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 13:58:08.478737   19502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 13:58:08.478823   19502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 515.057931ms
	I0723 13:58:08.478893   19502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 13:58:08.478946   19502 kubeadm.go:310] [api-check] The API server is healthy after 5.002578534s
	I0723 13:58:08.479033   19502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 13:58:08.479139   19502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 13:58:08.479202   19502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 13:58:08.479482   19502 kubeadm.go:310] [mark-control-plane] Marking the node addons-566823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 13:58:08.479568   19502 kubeadm.go:310] [bootstrap-token] Using token: uyhqod.zgrugty1wvig1w59
	I0723 13:58:08.481081   19502 out.go:204]   - Configuring RBAC rules ...
	I0723 13:58:08.481205   19502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 13:58:08.481307   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 13:58:08.481486   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 13:58:08.481658   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 13:58:08.481758   19502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 13:58:08.481873   19502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 13:58:08.482037   19502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 13:58:08.482108   19502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 13:58:08.482161   19502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 13:58:08.482167   19502 kubeadm.go:310] 
	I0723 13:58:08.482215   19502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 13:58:08.482221   19502 kubeadm.go:310] 
	I0723 13:58:08.482287   19502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 13:58:08.482295   19502 kubeadm.go:310] 
	I0723 13:58:08.482327   19502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 13:58:08.482391   19502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 13:58:08.482468   19502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 13:58:08.482476   19502 kubeadm.go:310] 
	I0723 13:58:08.482535   19502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 13:58:08.482551   19502 kubeadm.go:310] 
	I0723 13:58:08.482615   19502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 13:58:08.482628   19502 kubeadm.go:310] 
	I0723 13:58:08.482677   19502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 13:58:08.482741   19502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 13:58:08.482824   19502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 13:58:08.482833   19502 kubeadm.go:310] 
	I0723 13:58:08.482947   19502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 13:58:08.483059   19502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 13:58:08.483067   19502 kubeadm.go:310] 
	I0723 13:58:08.483154   19502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uyhqod.zgrugty1wvig1w59 \
	I0723 13:58:08.483266   19502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 13:58:08.483298   19502 kubeadm.go:310] 	--control-plane 
	I0723 13:58:08.483306   19502 kubeadm.go:310] 
	I0723 13:58:08.483421   19502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 13:58:08.483430   19502 kubeadm.go:310] 
	I0723 13:58:08.483546   19502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uyhqod.zgrugty1wvig1w59 \
	I0723 13:58:08.483713   19502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
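kubeadm init has finished at this point; a quick way to confirm the new control plane is serving, using the admin kubeconfig path printed in the output above (a sketch, not captured in this run):

  export KUBECONFIG=/etc/kubernetes/admin.conf
  kubectl get nodes -o wide
  kubectl -n kube-system get pods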
	I0723 13:58:08.483732   19502 cni.go:84] Creating CNI manager for ""
	I0723 13:58:08.483741   19502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:58:08.485463   19502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 13:58:08.486819   19502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 13:58:08.497439   19502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
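The conflist is copied from memory, so its 496-byte body is not captured in the log. To inspect what actually landed on the node, one could run the following (illustrative commands, assuming the addons-566823 profile; not part of the recorded run):

  minikube -p addons-566823 ssh -- sudo ls -la /etc/cni/net.d
  minikube -p addons-566823 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist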
	I0723 13:58:08.516918   19502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 13:58:08.516985   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:08.517046   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-566823 minikube.k8s.io/updated_at=2024_07_23T13_58_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=addons-566823 minikube.k8s.io/primary=true
	I0723 13:58:08.537181   19502 ops.go:34] apiserver oom_adj: -16
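The clusterrolebinding and node-label commands above grant kube-system:default cluster-admin and stamp minikube metadata onto the node. One way to confirm both applied, mirroring the kubectl --context convention used elsewhere in this report (illustrative, not part of the run):

  kubectl --context addons-566823 get clusterrolebinding minikube-rbac
  kubectl --context addons-566823 get node addons-566823 --show-labels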
	I0723 13:58:08.633215   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:09.134020   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:09.633460   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:10.133970   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:10.633381   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:11.133664   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:11.633388   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:12.133248   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:12.634254   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:13.134061   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:13.634245   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:14.133413   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:14.633421   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:15.133822   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:15.633240   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:16.133347   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:16.634079   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:17.133700   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:17.633697   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:18.133777   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:18.633494   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:19.133930   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:19.633957   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:20.133368   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:20.633957   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.133510   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.634019   19502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 13:58:21.728145   19502 kubeadm.go:1113] duration metric: took 13.211219421s to wait for elevateKubeSystemPrivileges
	I0723 13:58:21.728174   19502 kubeadm.go:394] duration metric: took 23.835142379s to StartCluster
	I0723 13:58:21.728194   19502 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:58:21.728327   19502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:58:21.728966   19502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 13:58:21.729216   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 13:58:21.729246   19502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 13:58:21.729290   19502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0723 13:58:21.729401   19502 addons.go:69] Setting yakd=true in profile "addons-566823"
	I0723 13:58:21.729433   19502 addons.go:234] Setting addon yakd=true in "addons-566823"
	I0723 13:58:21.729468   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729522   19502 addons.go:69] Setting inspektor-gadget=true in profile "addons-566823"
	I0723 13:58:21.729539   19502 addons.go:69] Setting storage-provisioner=true in profile "addons-566823"
	I0723 13:58:21.729562   19502 addons.go:234] Setting addon inspektor-gadget=true in "addons-566823"
	I0723 13:58:21.729566   19502 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-566823"
	I0723 13:58:21.729584   19502 addons.go:69] Setting registry=true in profile "addons-566823"
	I0723 13:58:21.729576   19502 addons.go:69] Setting volcano=true in profile "addons-566823"
	I0723 13:58:21.729603   19502 addons.go:234] Setting addon registry=true in "addons-566823"
	I0723 13:58:21.729609   19502 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-566823"
	I0723 13:58:21.729616   19502 addons.go:69] Setting metrics-server=true in profile "addons-566823"
	I0723 13:58:21.729622   19502 addons.go:234] Setting addon volcano=true in "addons-566823"
	I0723 13:58:21.729625   19502 addons.go:69] Setting helm-tiller=true in profile "addons-566823"
	I0723 13:58:21.729637   19502 addons.go:234] Setting addon metrics-server=true in "addons-566823"
	I0723 13:58:21.729640   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729650   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729657   19502 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-566823"
	I0723 13:58:21.729661   19502 addons.go:69] Setting gcp-auth=true in profile "addons-566823"
	I0723 13:58:21.729667   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729675   19502 mustload.go:65] Loading cluster: addons-566823
	I0723 13:58:21.729677   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729600   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:58:21.729604   19502 addons.go:69] Setting cloud-spanner=true in profile "addons-566823"
	I0723 13:58:21.729804   19502 addons.go:234] Setting addon cloud-spanner=true in "addons-566823"
	I0723 13:58:21.729828   19502 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-566823"
	I0723 13:58:21.729855   19502 addons.go:69] Setting ingress=true in profile "addons-566823"
	I0723 13:58:21.729874   19502 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-566823"
	I0723 13:58:21.729881   19502 config.go:182] Loaded profile config "addons-566823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 13:58:21.729895   19502 addons.go:234] Setting addon ingress=true in "addons-566823"
	I0723 13:58:21.729922   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.729953   19502 addons.go:69] Setting ingress-dns=true in profile "addons-566823"
	I0723 13:58:21.729981   19502 addons.go:234] Setting addon ingress-dns=true in "addons-566823"
	I0723 13:58:21.729990   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730005   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730026   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730031   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730031   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730028   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730063   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730088   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730120   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729833   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730223   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730279   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730317   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730351   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730364   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730406   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729597   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.730533   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.730558   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.730071   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.729930   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729575   19502 addons.go:234] Setting addon storage-provisioner=true in "addons-566823"
	I0723 13:58:21.729652   19502 addons.go:69] Setting default-storageclass=true in profile "addons-566823"
	I0723 13:58:21.731125   19502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-566823"
	I0723 13:58:21.729604   19502 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-566823"
	I0723 13:58:21.731199   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729943   19502 addons.go:69] Setting volumesnapshots=true in profile "addons-566823"
	I0723 13:58:21.731324   19502 addons.go:234] Setting addon volumesnapshots=true in "addons-566823"
	I0723 13:58:21.731365   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.731508   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731532   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731604   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731639   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731677   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.731693   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.731131   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.729644   19502 addons.go:234] Setting addon helm-tiller=true in "addons-566823"
	I0723 13:58:21.732027   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.732087   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.732118   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.742498   19502 out.go:177] * Verifying Kubernetes components...
	I0723 13:58:21.744203   19502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 13:58:21.751122   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0723 13:58:21.751612   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0723 13:58:21.751814   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.752225   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.752774   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.752792   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.752965   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.752998   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.753070   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I0723 13:58:21.753567   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.753633   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.753690   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.754177   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.754194   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.754257   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.754290   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.754937   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.754971   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.755123   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.755718   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0723 13:58:21.760804   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0723 13:58:21.761285   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.761852   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.761878   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.762264   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.762455   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.762911   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I0723 13:58:21.763418   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.763964   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.763984   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.764424   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.764595   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.764901   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.764929   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.765020   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.766674   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.766698   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.766798   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.766823   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767115   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.767139   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767333   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.767369   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.767972   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.769049   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.769068   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.769133   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0723 13:58:21.771008   19502 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-566823"
	I0723 13:58:21.771050   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.771408   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.771425   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.771709   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.771806   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.772027   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.772902   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.772918   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.774628   19502 addons.go:234] Setting addon default-storageclass=true in "addons-566823"
	I0723 13:58:21.774667   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:21.775005   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.775032   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.775555   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.776087   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.776119   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.785003   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0723 13:58:21.814755   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.814942   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0723 13:58:21.815078   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0723 13:58:21.815329   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0723 13:58:21.815421   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I0723 13:58:21.815619   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0723 13:58:21.815755   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0723 13:58:21.815887   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I0723 13:58:21.816222   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0723 13:58:21.816289   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0723 13:58:21.816405   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0723 13:58:21.816502   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0723 13:58:21.816850   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.816935   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817006   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817073   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817414   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817434   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817580   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817588   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817705   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817714   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817766   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.817894   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.817903   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.817951   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.818435   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.818585   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.818769   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.818781   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.818915   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.818933   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.819086   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.819122   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.819180   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.819224   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.819305   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.819977   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.820086   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820153   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820167   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820193   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.820266   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.820320   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.820273   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.820374   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820376   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.820283   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.820421   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.820662   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.820688   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.821111   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.821212   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821266   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821298   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.821412   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.821433   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.821497   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.821673   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.821687   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.821772   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.821797   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.821976   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.822008   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.822602   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.822770   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.822833   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.822883   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.822928   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.823055   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.823066   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.824349   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.824356   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0723 13:58:21.824878   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.824917   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.825750   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.826874   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.826904   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.827015   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0723 13:58:21.827356   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.827378   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.827420   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.827899   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.827958   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.828027   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.828058   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.828805   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.829390   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.829571   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0723 13:58:21.829754   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.829939   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0723 13:58:21.830202   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.830857   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.831188   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.832533   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.833453   19502 out.go:177]   - Using image docker.io/registry:2.8.3
	I0723 13:58:21.833569   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.833591   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.833908   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0723 13:58:21.834873   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.835492   19502 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0723 13:58:21.836567   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0723 13:58:21.836797   19502 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0723 13:58:21.836809   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0723 13:58:21.836826   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.839914   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0723 13:58:21.841432   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.841613   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.841781   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.842007   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.843334   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.843363   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.843384   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.843651   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0723 13:58:21.844501   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0723 13:58:21.844882   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.845025   19502 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0723 13:58:21.845364   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.845387   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.845734   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.846269   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0723 13:58:21.846275   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:21.846312   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:21.846493   19502 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 13:58:21.846510   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0723 13:58:21.846529   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.849798   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.850095   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0723 13:58:21.850241   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.850276   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.850466   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.850624   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.850796   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.850943   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.851836   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0723 13:58:21.851852   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0723 13:58:21.851866   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.852215   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0723 13:58:21.852837   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.853617   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.853638   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.854437   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.854713   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.855296   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.858535   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.858549   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.858566   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.858816   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.858978   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.859117   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.862687   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.865002   19502 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0723 13:58:21.865425   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0723 13:58:21.865864   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.866339   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.866355   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.866468   19502 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0723 13:58:21.866481   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0723 13:58:21.866496   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.867106   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.867849   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.868235   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0723 13:58:21.868723   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.869363   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.869379   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.869982   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.870132   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.870267   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.871818   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.872267   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.872260   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.872291   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.872432   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.872589   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.872733   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.872867   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.874065   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0723 13:58:21.874368   19502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 13:58:21.874430   19502 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0723 13:58:21.875214   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.875723   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.875740   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.876106   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.876297   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.876412   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0723 13:58:21.876712   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.877138   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.877155   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.877451   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.877622   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.877788   19502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 13:58:21.877810   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 13:58:21.877826   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.877904   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.878334   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0723 13:58:21.878351   19502 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0723 13:58:21.878367   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.881125   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.881435   19502 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0723 13:58:21.881631   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0723 13:58:21.882015   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.882433   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.882589   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.882607   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.882682   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.882709   19502 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0723 13:58:21.882732   19502 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0723 13:58:21.882756   19502 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0723 13:58:21.882772   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.883077   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.883152   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.883461   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.883481   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.883538   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.883780   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.883842   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.883890   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0723 13:58:21.884022   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.884189   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 13:58:21.884203   19502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 13:58:21.884220   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.884341   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.884942   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.885068   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.885077   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.885295   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.885362   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.885477   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.885661   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.885738   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0723 13:58:21.885784   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.886274   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.886405   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.886683   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.886806   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.886824   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.887260   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.887273   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.887644   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.887717   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889106   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.889125   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889316   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.889374   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.889559   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.889758   19502 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0723 13:58:21.889822   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.890020   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.890276   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.890293   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.890333   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.890417   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.890422   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0723 13:58:21.890661   19502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 13:58:21.890679   19502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 13:58:21.890694   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.890748   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.890898   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.891050   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.891356   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.891508   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0723 13:58:21.891524   19502 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0723 13:58:21.891540   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.891589   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.892194   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.892214   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.892606   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.892803   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.893006   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:21.893856   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42233
	I0723 13:58:21.894305   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.894487   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.894620   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.894945   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.894976   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.895297   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.895106   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0723 13:58:21.895204   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.895493   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.895509   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.895590   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.895627   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.895715   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.895724   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:21.895833   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.896019   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.896119   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.896127   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.896258   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.896384   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.896563   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.896590   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.896663   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.897087   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.897100   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:21.897112   19502 out.go:177]   - Using image docker.io/busybox:stable
	I0723 13:58:21.897494   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.897718   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.898334   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0723 13:58:21.898897   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.899133   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:21.899149   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:21.899306   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:21.899319   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:21.899328   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:21.899334   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:21.899492   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:21.899506   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	W0723 13:58:21.899579   19502 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0723 13:58:21.899772   19502 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 13:58:21.899788   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0723 13:58:21.899800   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.900198   19502 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0723 13:58:21.901623   19502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 13:58:21.901637   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0723 13:58:21.901647   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.901748   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.902916   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.903305   19502 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0723 13:58:21.903471   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.903489   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.903595   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.903783   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.903925   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.904094   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.904898   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0723 13:58:21.904911   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0723 13:58:21.904922   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.904995   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.905362   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.905391   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.905685   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.905884   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.906025   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.906151   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.907492   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.907794   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.907816   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.907853   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0723 13:58:21.908009   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.908176   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.908187   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:21.908269   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.908349   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:21.908835   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:21.908856   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	W0723 13:58:21.908983   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39210->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.909002   19502 retry.go:31] will retry after 169.494817ms: ssh: handshake failed: read tcp 192.168.39.1:39210->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.909315   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:21.909472   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:21.911153   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:21.913154   19502 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0723 13:58:21.914697   19502 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 13:58:21.914716   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0723 13:58:21.914733   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:21.917114   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.917414   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:21.917445   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:21.917662   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:21.917835   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:21.917970   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:21.918076   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	W0723 13:58:21.926120   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39224->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:21.926150   19502 retry.go:31] will retry after 313.981963ms: ssh: handshake failed: read tcp 192.168.39.1:39224->192.168.39.114:22: read: connection reset by peer
	W0723 13:58:22.079639   19502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39236->192.168.39.114:22: read: connection reset by peer
	I0723 13:58:22.079665   19502 retry.go:31] will retry after 539.540893ms: ssh: handshake failed: read tcp 192.168.39.1:39236->192.168.39.114:22: read: connection reset by peer
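(Annotation.) The three sshutil.go warnings above are minikube dialing the node's SSH port while the guest is still settling: each failed handshake to 192.168.39.114:22 is retried after a short, growing delay instead of failing the addon installation. A minimal shell sketch of that dial-and-retry behaviour is below; the IP, user and key path are taken from the sshutil.go:53 lines in this log, while the backoff values are illustrative, not minikube's actual retry settings.

    # Illustrative only: retry an SSH dial with a small, growing backoff,
    # mirroring the "will retry after ...ms" lines above.
    for delay in 0.2 0.4 0.8 1.6; do
      if ssh -o ConnectTimeout=5 \
             -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa \
             docker@192.168.39.114 true; then
        break                          # handshake succeeded, stop retrying
      fi
      echo "ssh handshake failed; retrying in ${delay}s"
      sleep "${delay}"
    done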
	I0723 13:58:22.202176   19502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 13:58:22.202245   19502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0723 13:58:22.230801   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 13:58:22.311235   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 13:58:22.311270   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0723 13:58:22.324790   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0723 13:58:22.324818   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0723 13:58:22.327779   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 13:58:22.345388   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0723 13:58:22.345414   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0723 13:58:22.355748   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0723 13:58:22.372003   19502 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0723 13:58:22.372029   19502 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0723 13:58:22.376325   19502 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0723 13:58:22.376350   19502 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0723 13:58:22.437458   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 13:58:22.448940   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 13:58:22.448965   19502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 13:58:22.479853   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 13:58:22.481228   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0723 13:58:22.481247   19502 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0723 13:58:22.485486   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 13:58:22.518685   19502 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0723 13:58:22.518710   19502 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0723 13:58:22.534869   19502 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0723 13:58:22.534888   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0723 13:58:22.546789   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0723 13:58:22.546816   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0723 13:58:22.575320   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0723 13:58:22.575344   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0723 13:58:22.602410   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 13:58:22.657250   19502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 13:58:22.657271   19502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 13:58:22.674093   19502 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0723 13:58:22.674117   19502 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0723 13:58:22.676065   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0723 13:58:22.676082   19502 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0723 13:58:22.701679   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0723 13:58:22.785069   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0723 13:58:22.785098   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0723 13:58:22.815135   19502 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0723 13:58:22.815172   19502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0723 13:58:22.958101   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0723 13:58:22.958124   19502 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0723 13:58:22.966549   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 13:58:22.978592   19502 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0723 13:58:22.978613   19502 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0723 13:58:23.024720   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0723 13:58:23.024744   19502 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0723 13:58:23.100506   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0723 13:58:23.100540   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0723 13:58:23.157324   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0723 13:58:23.157349   19502 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0723 13:58:23.248309   19502 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:23.248337   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0723 13:58:23.250488   19502 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0723 13:58:23.250510   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0723 13:58:23.360492   19502 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0723 13:58:23.360512   19502 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0723 13:58:23.429353   19502 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0723 13:58:23.429382   19502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0723 13:58:23.465635   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0723 13:58:23.524673   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:23.546884   19502 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0723 13:58:23.546911   19502 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0723 13:58:23.633648   19502 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0723 13:58:23.633676   19502 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0723 13:58:23.647703   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0723 13:58:23.647725   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0723 13:58:23.769358   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0723 13:58:23.866316   19502 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 13:58:23.866350   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0723 13:58:23.867719   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0723 13:58:23.867735   19502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0723 13:58:23.953525   19502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.751244478s)
	I0723 13:58:23.953562   19502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0723 13:58:23.953563   19502 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.751356542s)
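(Annotation.) For context on the start.go:971 line above: the long /bin/bash pipeline completed at 13:58:23.953525 rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the existing forward directive so that host.minikube.internal resolves to the libvirt gateway (192.168.39.1 on this run). A quick way to confirm the injected fragment, assuming a working kubeconfig for this cluster, is sketched below; the expected output in the comments simply mirrors the text inserted by the sed expression above.

    # Show the injected section of the Corefile.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # expected, roughly:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }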
	I0723 13:58:23.954217   19502 node_ready.go:35] waiting up to 6m0s for node "addons-566823" to be "Ready" ...
	I0723 13:58:23.961880   19502 node_ready.go:49] node "addons-566823" has status "Ready":"True"
	I0723 13:58:23.961905   19502 node_ready.go:38] duration metric: took 7.623495ms for node "addons-566823" to be "Ready" ...
	I0723 13:58:23.961912   19502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 13:58:23.994410   19502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:24.072223   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 13:58:24.167818   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0723 13:58:24.167842   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0723 13:58:24.401655   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0723 13:58:24.401680   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0723 13:58:24.457418   19502 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-566823" context rescaled to 1 replicas
	I0723 13:58:24.752279   19502 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 13:58:24.752312   19502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0723 13:58:25.069272   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 13:58:26.146944   19502 pod_ready.go:102] pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace has status "Ready":"False"
	I0723 13:58:26.374629   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.143794582s)
	I0723 13:58:26.374675   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374686   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.374733   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.046925928s)
	I0723 13:58:26.374770   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.018998287s)
	I0723 13:58:26.374778   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374787   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.374790   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.374795   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375101   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375167   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375176   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375184   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375185   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375194   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375194   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375243   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375647   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375664   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375679   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375678   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375688   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.375693   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375696   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375705   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375714   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.375722   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.375729   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.375969   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:26.375996   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.376004   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:26.619278   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:26.619301   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:26.619685   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:26.619707   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:28.620840   19502 pod_ready.go:92] pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.620872   19502 pod_ready.go:81] duration metric: took 4.626433715s for pod "coredns-7db6d8ff4d-4zjr6" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.620885   19502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.736222   19502 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.736254   19502 pod_ready.go:81] duration metric: took 115.361023ms for pod "coredns-7db6d8ff4d-jhdm4" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.736266   19502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.806591   19502 pod_ready.go:92] pod "etcd-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.806617   19502 pod_ready.go:81] duration metric: took 70.343575ms for pod "etcd-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.806631   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.859260   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0723 13:58:28.859310   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:28.862421   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:28.862967   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:28.862999   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:28.863138   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:28.863353   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:28.863546   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:28.863683   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:28.890591   19502 pod_ready.go:92] pod "kube-apiserver-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.890616   19502 pod_ready.go:81] duration metric: took 83.97937ms for pod "kube-apiserver-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.890627   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.935088   19502 pod_ready.go:92] pod "kube-controller-manager-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.935111   19502 pod_ready.go:81] duration metric: took 44.475142ms for pod "kube-controller-manager-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.935125   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhm7l" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.980609   19502 pod_ready.go:92] pod "kube-proxy-dhm7l" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:28.980630   19502 pod_ready.go:81] duration metric: took 45.499372ms for pod "kube-proxy-dhm7l" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:28.980640   19502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:29.136374   19502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0723 13:58:29.208595   19502 addons.go:234] Setting addon gcp-auth=true in "addons-566823"
	I0723 13:58:29.208655   19502 host.go:66] Checking if "addons-566823" exists ...
	I0723 13:58:29.209090   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:29.209134   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:29.224251   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0723 13:58:29.224692   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:29.225164   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:29.225181   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:29.225541   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:29.225997   19502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 13:58:29.226021   19502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 13:58:29.242031   19502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0723 13:58:29.242487   19502 main.go:141] libmachine: () Calling .GetVersion
	I0723 13:58:29.242959   19502 main.go:141] libmachine: Using API Version  1
	I0723 13:58:29.242981   19502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 13:58:29.243280   19502 main.go:141] libmachine: () Calling .GetMachineName
	I0723 13:58:29.243490   19502 main.go:141] libmachine: (addons-566823) Calling .GetState
	I0723 13:58:29.245040   19502 main.go:141] libmachine: (addons-566823) Calling .DriverName
	I0723 13:58:29.245261   19502 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0723 13:58:29.245287   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHHostname
	I0723 13:58:29.248182   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:29.248644   19502 main.go:141] libmachine: (addons-566823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:ac", ip: ""} in network mk-addons-566823: {Iface:virbr1 ExpiryTime:2024-07-23 14:57:40 +0000 UTC Type:0 Mac:52:54:00:41:2b:ac Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-566823 Clientid:01:52:54:00:41:2b:ac}
	I0723 13:58:29.248674   19502 main.go:141] libmachine: (addons-566823) DBG | domain addons-566823 has defined IP address 192.168.39.114 and MAC address 52:54:00:41:2b:ac in network mk-addons-566823
	I0723 13:58:29.248828   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHPort
	I0723 13:58:29.249043   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHKeyPath
	I0723 13:58:29.249209   19502 main.go:141] libmachine: (addons-566823) Calling .GetSSHUsername
	I0723 13:58:29.249406   19502 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/addons-566823/id_rsa Username:docker}
	I0723 13:58:29.305516   19502 pod_ready.go:92] pod "kube-scheduler-addons-566823" in "kube-system" namespace has status "Ready":"True"
	I0723 13:58:29.305545   19502 pod_ready.go:81] duration metric: took 324.897765ms for pod "kube-scheduler-addons-566823" in "kube-system" namespace to be "Ready" ...
	I0723 13:58:29.305555   19502 pod_ready.go:38] duration metric: took 5.343633359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
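(Annotation.) The pod_ready.go lines between 13:58:23.96 and 13:58:29.30 are minikube polling each system-critical pod until its Ready condition is true, using the label selectors listed at pod_ready.go:35. Roughly the same check can be expressed with kubectl wait against those labels; the 120s timeout below is an illustrative choice, not the 6m0s budget used in the log.

    # Wait for DNS, the control-plane components, and kube-proxy to report Ready.
    kubectl -n kube-system wait --for=condition=Ready pod --timeout=120s -l k8s-app=kube-dns
    for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod --timeout=120s -l component="${c}"
    done
    kubectl -n kube-system wait --for=condition=Ready pod --timeout=120s -l k8s-app=kube-proxy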
	I0723 13:58:29.305574   19502 api_server.go:52] waiting for apiserver process to appear ...
	I0723 13:58:29.305651   19502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 13:58:29.853104   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.415607729s)
	I0723 13:58:29.853144   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.373261095s)
	I0723 13:58:29.853161   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853173   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853181   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853201   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853288   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.367769604s)
	I0723 13:58:29.853319   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853335   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853372   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.151664239s)
	I0723 13:58:29.853402   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853419   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853321   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.250882474s)
	I0723 13:58:29.853502   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853517   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853521   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.886943599s)
	I0723 13:58:29.853544   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853562   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853642   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.387973032s)
	I0723 13:58:29.853674   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853684   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853760   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.329048662s)
	I0723 13:58:29.853777   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853793   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	W0723 13:58:29.853802   19502 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0723 13:58:29.853821   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853833   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.853840   19502 retry.go:31] will retry after 174.923181ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
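	Note: the apply failure above is the usual CRD-establishment race. The VolumeSnapshotClass object is applied in the same kubectl invocation that creates the volumesnapshotclasses CRD, and the API server has not yet registered the new kind, hence "no matches for kind VolumeSnapshotClass" and the hint to install CRDs first. As the retry.go line shows, the addon installer simply retries (here with kubectl apply --force roughly 175ms later, see the Run line at 13:58:30.029), and that retry completes successfully about 1.3s afterwards at 13:58:31. A minimal sketch of how the race could be avoided manually, assuming the same CRD and manifest paths as in this run (illustrative only, not part of the test output):
	
		# wait until the CRD is Established before applying objects of that kind
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	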
	I0723 13:58:29.853844   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853865   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853873   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853876   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.853927   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853935   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.853946   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853967   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.853992   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.853999   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854006   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854013   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854041   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.084651231s)
	I0723 13:58:29.854063   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854061   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854070   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854074   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854079   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854085   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.853909   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854117   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854125   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.854133   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.854311   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.854341   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854352   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854360   19502 addons.go:475] Verifying addon registry=true in "addons-566823"
	I0723 13:58:29.854589   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854607   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854614   19502 addons.go:475] Verifying addon ingress=true in "addons-566823"
	I0723 13:58:29.854717   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854728   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.854900   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.854924   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.854932   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.855318   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.783057659s)
	I0723 13:58:29.855359   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.855369   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.855487   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.855518   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.855525   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.855540   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.855553   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.856106   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856136   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856144   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856549   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856568   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856577   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.856596   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.856680   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856693   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856701   19502 addons.go:475] Verifying addon metrics-server=true in "addons-566823"
	I0723 13:58:29.856837   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856864   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856870   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856905   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.856940   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.856951   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.856968   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.856975   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.857036   19502 out.go:177] * Verifying ingress addon...
	I0723 13:58:29.857064   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.857075   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.857084   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.857092   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.857186   19502 out.go:177] * Verifying registry addon...
	I0723 13:58:29.857344   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.857367   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.857799   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.858997   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:29.859019   19502 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-566823 service yakd-dashboard -n yakd-dashboard
	
	I0723 13:58:29.859024   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.859085   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.860000   19502 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0723 13:58:29.860148   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0723 13:58:29.891120   19502 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 13:58:29.891144   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:29.891218   19502 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0723 13:58:29.891237   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:29.912473   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:29.912492   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:29.912768   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:29.912783   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:29.912809   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:30.029357   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 13:58:30.365800   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:30.367052   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:30.867393   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:30.867935   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:31.123139   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.053819252s)
	I0723 13:58:31.123187   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.123195   19502 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.817516334s)
	I0723 13:58:31.123225   19502 api_server.go:72] duration metric: took 9.393919744s to wait for apiserver process to appear ...
	I0723 13:58:31.123236   19502 api_server.go:88] waiting for apiserver healthz status ...
	I0723 13:58:31.123238   19502 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.877955247s)
	I0723 13:58:31.123256   19502 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0723 13:58:31.123201   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.123737   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.123752   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.123756   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:31.123760   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.123849   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.124133   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.124149   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.124160   19502 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-566823"
	I0723 13:58:31.124866   19502 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 13:58:31.125760   19502 out.go:177] * Verifying csi-hostpath-driver addon...
	I0723 13:58:31.127664   19502 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0723 13:58:31.128318   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0723 13:58:31.129235   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0723 13:58:31.129255   19502 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0723 13:58:31.137062   19502 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0723 13:58:31.144990   19502 api_server.go:141] control plane version: v1.30.3
	I0723 13:58:31.145023   19502 api_server.go:131] duration metric: took 21.779021ms to wait for apiserver health ...
	I0723 13:58:31.145033   19502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 13:58:31.166060   19502 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 13:58:31.166080   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:31.175064   19502 system_pods.go:59] 19 kube-system pods found
	I0723 13:58:31.175103   19502 system_pods.go:61] "coredns-7db6d8ff4d-4zjr6" [44af35b9-1b02-4ea2-ae0c-edc96976f89a] Running
	I0723 13:58:31.175109   19502 system_pods.go:61] "coredns-7db6d8ff4d-jhdm4" [fa9b7640-f730-448e-942f-44fd0788921e] Running
	I0723 13:58:31.175116   19502 system_pods.go:61] "csi-hostpath-attacher-0" [69259ffc-bf8b-4c26-bfa8-e06e26e990eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0723 13:58:31.175121   19502 system_pods.go:61] "csi-hostpath-resizer-0" [8af26a5d-3cc4-4627-b99f-49f1153b5fac] Pending
	I0723 13:58:31.175131   19502 system_pods.go:61] "csi-hostpathplugin-gnjgh" [0d878af2-8cec-4825-910d-8eb02e65b9ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0723 13:58:31.175153   19502 system_pods.go:61] "etcd-addons-566823" [009c1e05-4ba2-4525-bca9-2834a1b4a836] Running
	I0723 13:58:31.175161   19502 system_pods.go:61] "kube-apiserver-addons-566823" [f8a4a022-c913-4db5-ad61-304ee63f66a7] Running
	I0723 13:58:31.175166   19502 system_pods.go:61] "kube-controller-manager-addons-566823" [32f9ec49-5bb3-45f4-8f86-969feb94d86e] Running
	I0723 13:58:31.175174   19502 system_pods.go:61] "kube-ingress-dns-minikube" [03cc5ad6-8256-43b3-b473-93939d6d75cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0723 13:58:31.175181   19502 system_pods.go:61] "kube-proxy-dhm7l" [9cf78545-7300-4f1a-a947-7459b858880d] Running
	I0723 13:58:31.175185   19502 system_pods.go:61] "kube-scheduler-addons-566823" [6e151043-406d-40fb-bc07-f56affe614fa] Running
	I0723 13:58:31.175191   19502 system_pods.go:61] "metrics-server-c59844bb4-f52cd" [6b45f2b1-e48c-4097-aa53-5c2f5fea4806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 13:58:31.175200   19502 system_pods.go:61] "nvidia-device-plugin-daemonset-ntcgv" [fa2530a9-7fcd-4a19-bde9-4a8e1607e1e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0723 13:58:31.175209   19502 system_pods.go:61] "registry-656c9c8d9c-4gvbc" [191b0c30-0add-4831-9cb0-de8b776cedc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0723 13:58:31.175215   19502 system_pods.go:61] "registry-proxy-4b47m" [02461034-b1da-43d3-8017-4b96ba1b9c2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0723 13:58:31.175224   19502 system_pods.go:61] "snapshot-controller-745499f584-hw5vj" [93d07ee6-b8df-4528-9996-a505db12639b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.175237   19502 system_pods.go:61] "snapshot-controller-745499f584-r8tcx" [cd8e271a-0a4e-4404-afdc-402eb6bd57ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.175243   19502 system_pods.go:61] "storage-provisioner" [bd28f68d-bdb2-47cf-8029-1043b5280270] Running
	I0723 13:58:31.175255   19502 system_pods.go:61] "tiller-deploy-6677d64bcd-598dj" [98da9631-ad0b-4406-b5c6-c709e679ab9d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0723 13:58:31.175271   19502 system_pods.go:74] duration metric: took 30.23144ms to wait for pod list to return data ...
	I0723 13:58:31.175287   19502 default_sa.go:34] waiting for default service account to be created ...
	I0723 13:58:31.189011   19502 default_sa.go:45] found service account: "default"
	I0723 13:58:31.189038   19502 default_sa.go:55] duration metric: took 13.741176ms for default service account to be created ...
	I0723 13:58:31.189051   19502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 13:58:31.206724   19502 system_pods.go:86] 19 kube-system pods found
	I0723 13:58:31.206749   19502 system_pods.go:89] "coredns-7db6d8ff4d-4zjr6" [44af35b9-1b02-4ea2-ae0c-edc96976f89a] Running
	I0723 13:58:31.206755   19502 system_pods.go:89] "coredns-7db6d8ff4d-jhdm4" [fa9b7640-f730-448e-942f-44fd0788921e] Running
	I0723 13:58:31.206762   19502 system_pods.go:89] "csi-hostpath-attacher-0" [69259ffc-bf8b-4c26-bfa8-e06e26e990eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0723 13:58:31.206769   19502 system_pods.go:89] "csi-hostpath-resizer-0" [8af26a5d-3cc4-4627-b99f-49f1153b5fac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0723 13:58:31.206777   19502 system_pods.go:89] "csi-hostpathplugin-gnjgh" [0d878af2-8cec-4825-910d-8eb02e65b9ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0723 13:58:31.206782   19502 system_pods.go:89] "etcd-addons-566823" [009c1e05-4ba2-4525-bca9-2834a1b4a836] Running
	I0723 13:58:31.206787   19502 system_pods.go:89] "kube-apiserver-addons-566823" [f8a4a022-c913-4db5-ad61-304ee63f66a7] Running
	I0723 13:58:31.206791   19502 system_pods.go:89] "kube-controller-manager-addons-566823" [32f9ec49-5bb3-45f4-8f86-969feb94d86e] Running
	I0723 13:58:31.206799   19502 system_pods.go:89] "kube-ingress-dns-minikube" [03cc5ad6-8256-43b3-b473-93939d6d75cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0723 13:58:31.206805   19502 system_pods.go:89] "kube-proxy-dhm7l" [9cf78545-7300-4f1a-a947-7459b858880d] Running
	I0723 13:58:31.206810   19502 system_pods.go:89] "kube-scheduler-addons-566823" [6e151043-406d-40fb-bc07-f56affe614fa] Running
	I0723 13:58:31.206817   19502 system_pods.go:89] "metrics-server-c59844bb4-f52cd" [6b45f2b1-e48c-4097-aa53-5c2f5fea4806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 13:58:31.206823   19502 system_pods.go:89] "nvidia-device-plugin-daemonset-ntcgv" [fa2530a9-7fcd-4a19-bde9-4a8e1607e1e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0723 13:58:31.206831   19502 system_pods.go:89] "registry-656c9c8d9c-4gvbc" [191b0c30-0add-4831-9cb0-de8b776cedc3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0723 13:58:31.206839   19502 system_pods.go:89] "registry-proxy-4b47m" [02461034-b1da-43d3-8017-4b96ba1b9c2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0723 13:58:31.206848   19502 system_pods.go:89] "snapshot-controller-745499f584-hw5vj" [93d07ee6-b8df-4528-9996-a505db12639b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.206857   19502 system_pods.go:89] "snapshot-controller-745499f584-r8tcx" [cd8e271a-0a4e-4404-afdc-402eb6bd57ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0723 13:58:31.206863   19502 system_pods.go:89] "storage-provisioner" [bd28f68d-bdb2-47cf-8029-1043b5280270] Running
	I0723 13:58:31.206869   19502 system_pods.go:89] "tiller-deploy-6677d64bcd-598dj" [98da9631-ad0b-4406-b5c6-c709e679ab9d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0723 13:58:31.206878   19502 system_pods.go:126] duration metric: took 17.819593ms to wait for k8s-apps to be running ...
	I0723 13:58:31.206888   19502 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 13:58:31.206929   19502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 13:58:31.244583   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0723 13:58:31.244612   19502 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0723 13:58:31.306864   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27745784s)
	I0723 13:58:31.306907   19502 system_svc.go:56] duration metric: took 100.010856ms WaitForService to wait for kubelet
	I0723 13:58:31.306927   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.306943   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.306935   19502 kubeadm.go:582] duration metric: took 9.577629013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 13:58:31.306965   19502 node_conditions.go:102] verifying NodePressure condition ...
	I0723 13:58:31.307294   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.307313   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.307332   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:31.307344   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:31.307576   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:31.307617   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:31.307629   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:31.310204   19502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 13:58:31.310234   19502 node_conditions.go:123] node cpu capacity is 2
	I0723 13:58:31.310248   19502 node_conditions.go:105] duration metric: took 3.276395ms to run NodePressure ...
	I0723 13:58:31.310260   19502 start.go:241] waiting for startup goroutines ...
	I0723 13:58:31.329454   19502 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 13:58:31.329472   19502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0723 13:58:31.367326   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:31.368802   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:31.378984   19502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 13:58:31.634056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:31.865237   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:31.866021   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:32.132939   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:32.388766   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:32.388872   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:32.494534   19502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.115508523s)
	I0723 13:58:32.494592   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:32.494609   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:32.494886   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:32.494905   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:32.494913   19502 main.go:141] libmachine: Making call to close driver server
	I0723 13:58:32.494923   19502 main.go:141] libmachine: (addons-566823) Calling .Close
	I0723 13:58:32.494936   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:32.495131   19502 main.go:141] libmachine: Successfully made call to close driver server
	I0723 13:58:32.495186   19502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 13:58:32.495202   19502 main.go:141] libmachine: (addons-566823) DBG | Closing plugin on server side
	I0723 13:58:32.496910   19502 addons.go:475] Verifying addon gcp-auth=true in "addons-566823"
	I0723 13:58:32.498867   19502 out.go:177] * Verifying gcp-auth addon...
	I0723 13:58:32.501027   19502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0723 13:58:32.520469   19502 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0723 13:58:32.520497   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:32.659478   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:32.865972   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:32.866095   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.006836   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:33.134100   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:33.370457   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.370928   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:33.506041   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:33.634493   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:33.868723   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:33.868930   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.005258   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:34.134103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:34.365342   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:34.366990   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.505036   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:34.634624   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:34.864780   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:34.865202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.005694   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:35.133706   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:35.365032   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.366599   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:35.506125   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:35.633733   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:35.864869   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:35.865243   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.005346   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:36.134524   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:36.589479   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:36.590482   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.590776   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:36.634932   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:36.865903   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:36.866132   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.008060   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:37.134155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:37.428556   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.429560   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:37.505224   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:37.633961   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:37.865381   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:37.866342   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:38.005067   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:38.133822   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:38.364857   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:38.366568   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:38.506228   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:38.633319   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.040865   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:39.041089   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.044975   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:39.133910   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.365010   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.365087   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:39.510776   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:39.634312   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:39.864769   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:39.865657   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:40.004562   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:40.135266   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:40.364798   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:40.364964   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:40.505729   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:40.633852   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:40.864930   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:40.865543   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.004956   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:41.133488   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:41.365168   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.365220   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:41.506309   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:41.634466   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:41.866993   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:41.867127   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.005414   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:42.134202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:42.365129   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.365135   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:42.506148   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:42.633941   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:42.864162   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:42.864303   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:43.005502   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:43.134807   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:43.365907   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:43.366162   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:43.505185   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:43.635747   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:43.863952   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:43.865544   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:44.005336   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:44.134225   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:44.365026   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:44.366707   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:44.504677   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:44.634555   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:44.864959   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:44.866361   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:45.012977   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:45.133916   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:45.365518   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:45.365795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:45.504404   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:45.634271   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:45.865279   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:45.865688   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.004860   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:46.133651   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:46.365572   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:46.365874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.506368   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:46.634669   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:46.866161   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:46.866661   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.005055   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:47.134519   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:47.369481   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.371941   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:47.505034   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:47.635357   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:47.864812   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:47.865218   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:48.011214   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:48.134098   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:48.364180   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:48.365207   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:48.505648   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:48.633420   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:48.865240   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:48.867566   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:49.004608   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:49.134532   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:49.365342   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:49.365444   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:49.505372   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:49.636738   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:49.901866   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:49.902039   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:50.005376   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:50.134244   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:50.365604   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:50.368275   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:50.505613   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:50.633852   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:50.864338   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:50.865595   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:51.004958   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:51.133930   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:51.365353   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:51.365758   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:51.505822   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:51.634550   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:51.863853   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:51.865464   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:52.004724   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:52.133989   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:52.364893   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:52.366262   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:52.505747   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:52.633350   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:52.864563   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:52.867114   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:53.005625   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:53.134265   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:53.365146   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:53.365940   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:53.505385   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:53.634547   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:53.865082   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:53.865669   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.005116   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:54.133562   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:54.366115   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:54.368343   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.504045   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:54.633927   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:54.867402   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:54.867627   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:55.005110   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:55.134107   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:55.368168   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:55.369936   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:55.504795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:55.633423   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:55.864906   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:55.865019   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:56.005251   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:56.134082   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:56.365073   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:56.365625   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:56.505758   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:56.634042   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:56.864135   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:56.864181   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:57.004505   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:57.135089   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:57.368997   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:57.369789   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:57.505130   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:57.633955   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:57.866329   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:57.866525   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.005056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:58.134054   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:58.365155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:58.368063   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.504959   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:58.634133   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:58.866299   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:58.866300   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.005040   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:59.133961   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:59.365122   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.365485   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:58:59.574645   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:58:59.634419   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:58:59.866537   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:58:59.866909   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.004281   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:00.136261   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:00.366991   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.367672   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:00.505823   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:00.642289   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:00.864816   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:00.867919   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:01.008626   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:01.134062   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:01.366213   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:01.366751   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:01.505397   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:01.636186   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:01.864487   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:01.866324   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.005666   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:02.134741   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:02.366733   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:02.368375   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.505103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:02.648636   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:02.867244   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:02.867428   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.009148   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:03.134123   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:03.364760   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.366193   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:03.505617   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:03.633999   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:03.864590   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:03.864910   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.004882   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:04.134097   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:04.364739   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:04.364806   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.505568   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:04.633263   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:04.866520   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:04.866877   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.005195   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:05.134266   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:05.365826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:05.366029   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.504567   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:05.633254   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:05.864748   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:05.868327   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:06.005315   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:06.134267   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:06.364402   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:06.364958   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:06.505403   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:06.721052   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.071523   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:07.071827   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.072491   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:07.133709   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.364693   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:07.364979   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.504899   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:07.635024   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:07.887523   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:07.888231   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:08.005670   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:08.134160   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:08.366564   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:08.366711   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:08.504858   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:08.633698   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:08.864373   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:08.864811   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.005319   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:09.134102   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:09.365052   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.365931   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:09.505160   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:09.635735   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:09.864409   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:09.864859   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.005379   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:10.137590   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:10.365481   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:10.366626   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.507309   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:10.634017   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:10.866521   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:10.872107   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:11.005333   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:11.134256   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:11.364951   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:11.365479   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:11.509190   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:11.634144   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:11.865091   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:11.865689   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:12.005089   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:12.133615   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:12.366042   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:12.366716   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:12.505189   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:12.634216   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:12.865076   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:12.866370   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:13.009333   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:13.134814   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:13.364678   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:13.365714   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:13.505874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:13.634194   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:13.866119   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:13.866956   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.004446   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:14.134056   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:14.365092   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:14.366629   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.505155   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:14.634412   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:14.864186   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:14.864648   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 13:59:15.006185   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:15.140078   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:15.365616   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:15.367152   19502 kapi.go:107] duration metric: took 45.507001664s to wait for kubernetes.io/minikube-addons=registry ...
	I0723 13:59:15.506531   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:15.634956   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:15.864278   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:16.005256   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:16.134826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:16.364629   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:16.504599   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:16.633522   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:16.865502   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:17.005372   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:17.136528   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:17.365232   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:17.505777   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:17.633476   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:17.865076   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:18.006520   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:18.134546   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:18.380631   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:18.621103   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:18.633718   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:18.864170   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:19.004914   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:19.137097   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:19.364110   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:19.505129   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:19.634016   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:19.867361   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:20.007489   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:20.134449   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:20.365034   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:20.504397   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:20.634139   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:20.864022   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:21.004098   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:21.133656   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:21.364880   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:21.509704   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:21.633240   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:21.864114   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:22.004951   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:22.133486   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:22.364721   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:22.507347   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:22.634252   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:22.864760   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:23.004490   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:23.133471   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:23.365026   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:23.507826   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:23.634170   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.185146   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.186664   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:24.186811   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:24.364979   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:24.504984   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:24.633987   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:24.864394   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:25.005547   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:25.134567   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:25.364647   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:25.506667   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:25.633469   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:25.865518   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:26.005595   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:26.135663   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:26.368712   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:26.506341   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:26.634820   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:26.863607   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:27.004109   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:27.134283   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:27.375890   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:27.512006   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:27.634754   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:27.865250   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:28.004892   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:28.133752   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:28.363768   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:28.504162   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:28.633753   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:28.864882   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:29.004658   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:29.134183   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:29.364911   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:29.511409   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:29.634544   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:29.864506   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:30.005225   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:30.133944   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:30.364834   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:30.511024   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:30.638337   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:30.869754   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:31.007236   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:31.135766   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:31.364185   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:31.505660   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:31.633679   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:31.993360   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:32.006154   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:32.134859   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:32.364344   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:32.504976   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:32.634076   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:32.864566   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:33.004674   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:33.133085   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:33.364094   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:33.505119   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:33.634542   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:33.865956   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:34.004619   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:34.134564   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:34.364734   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:34.504619   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:34.634779   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:34.864253   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:35.008601   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:35.135009   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:35.364952   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:35.504846   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:35.634049   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:35.864092   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:36.006022   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:36.134218   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:36.364762   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:36.505840   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:36.633880   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:36.865020   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:37.005035   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:37.133818   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:37.364293   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:37.504573   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:37.635633   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:37.864923   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:38.005492   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:38.134344   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:38.571143   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:38.571997   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:38.765989   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:38.869482   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:39.004665   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:39.134033   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:39.364542   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:39.508407   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:39.635034   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:39.863975   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:40.005536   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:40.133130   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:40.364230   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:40.515202   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:40.638359   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.256677   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:41.257363   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.257353   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:41.364831   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:41.507674   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:41.633287   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:41.864568   19502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 13:59:42.004874   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:42.133353   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:42.364498   19502 kapi.go:107] duration metric: took 1m12.504496508s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0723 13:59:42.505795   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:42.636029   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:43.006053   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:43.137737   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:43.504915   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:43.634504   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:44.004073   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:44.134172   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:44.505043   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:44.634472   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:45.005215   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:45.134315   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:45.506210   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:45.634287   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:46.005325   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:46.134183   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:46.506967   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:46.634898   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:47.004497   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 13:59:47.151519   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:47.505439   19502 kapi.go:107] duration metric: took 1m15.004410647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0723 13:59:47.507246   19502 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-566823 cluster.
	I0723 13:59:47.508643   19502 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0723 13:59:47.510029   19502 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
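	[editor's note] For context on the gcp-auth messages above: a minimal sketch of a pod manifest that opts out of credential mounting by carrying the `gcp-auth-skip-secret` label the log mentions. The pod name and image are placeholders (not taken from this run), and the "true" value follows the addon's documented convention rather than anything shown in this log.

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth        # placeholder name
	      labels:
	        gcp-auth-skip-secret: "true"   # gcp-auth webhook skips mounting credentials into this pod
	    spec:
	      containers:
	      - name: app
	        image: nginx                   # placeholder image

	Pods created without this label in the addons-566823 cluster would get the credentials mounted, as the addon output above describes; existing pods pick the behavior up only after being recreated or after rerunning the addon enable with --refresh.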
	I0723 13:59:47.633920   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:48.346054   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:48.633904   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:49.135686   19502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 13:59:49.634018   19502 kapi.go:107] duration metric: took 1m18.505696308s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0723 13:59:49.635779   19502 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, helm-tiller, metrics-server, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0723 13:59:49.637140   19502 addons.go:510] duration metric: took 1m27.907849923s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner-rancher nvidia-device-plugin helm-tiller metrics-server storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0723 13:59:49.637180   19502 start.go:246] waiting for cluster config update ...
	I0723 13:59:49.637200   19502 start.go:255] writing updated cluster config ...
	I0723 13:59:49.637447   19502 ssh_runner.go:195] Run: rm -f paused
	I0723 13:59:49.688312   19502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 13:59:49.690300   19502 out.go:177] * Done! kubectl is now configured to use "addons-566823" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.580041201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743575579968953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34397cc5-b58b-462e-b6b6-36756076b850 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.580686950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d780ec3f-66c6-4e60-85d3-a2fe7fee4d9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.580748303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d780ec3f-66c6-4e60-85d3-a2fe7fee4d9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.581111177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernet
es.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743
147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandb
oxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a950
1832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e
9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,PodSandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883c
df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c72182f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d780ec3f-66c6-4e60-85d3-a2fe7fee4d9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.619536360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f984ba4-5a62-4aef-9f68-eb995e290c7c name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.619610725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f984ba4-5a62-4aef-9f68-eb995e290c7c name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.626224765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1d139b9-d9b6-400d-bf95-ab10b47dfbf9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.627687663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743575627654351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1d139b9-d9b6-400d-bf95-ab10b47dfbf9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.628710922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ae06823-8997-4750-a18d-c904efd09f1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.628849965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ae06823-8997-4750-a18d-c904efd09f1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.629490601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernet
es.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743
147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandb
oxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a950
1832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e
9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,PodSandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883c
df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c72182f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ae06823-8997-4750-a18d-c904efd09f1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.664344618Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27073d46-8018-42d3-880d-97e0fba39a5f name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.664559009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27073d46-8018-42d3-880d-97e0fba39a5f name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.666123671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f806e318-c1e3-4f04-a66b-809468e9f53b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.667472022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743575667441949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f806e318-c1e3-4f04-a66b-809468e9f53b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.668182239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4aabe50-d564-4917-ae54-424495b19629 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.668253973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4aabe50-d564-4917-ae54-424495b19629 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.668564418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernet
es.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743
147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandb
oxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a950
1832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e
9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,PodSandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883c
df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c72182f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4aabe50-d564-4917-ae54-424495b19629 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.705307332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce0a4e4f-9c59-46d4-9140-ab4674a49e6e name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.705399129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce0a4e4f-9c59-46d4-9140-ab4674a49e6e name=/runtime.v1.RuntimeService/Version
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.706405965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06da8be6-7003-4482-a40b-67c52af1b6ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.707640179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721743575707610863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580614,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06da8be6-7003-4482-a40b-67c52af1b6ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.708213472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36c377dd-77bd-42d1-99be-0a1fb5e22f1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.708277000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36c377dd-77bd-42d1-99be-0a1fb5e22f1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:06:15 addons-566823 crio[678]: time="2024-07-23 14:06:15.708555974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b7c3a17efde74ef5cb9f1f1cb2c72d38610850f72ec219454d13c1590b889df,PodSandboxId:dbe642c6a83a7aadac1b573aaf131e59b42d1931ae15c68f9d94c2ab0f236d00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721743366246480326,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-d7gff,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15be02a5-428d-42b8-9e65-a3be389fac3e,},Annotations:map[string]string{io.kubernetes.container.hash: b40a6d45,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb4666a476ba16a3b01d9a03a1521f6f76b70bf0064f07af49c1a858e93295a,PodSandboxId:711231b4bab314ba1331ef18d790f98d9a36db6bc5994d99a28d2866600143bd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721743225162503061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df881e74-ce15-47aa-8763-8ee63ffc74ae,},Annotations:map[string]string{io.kubernet
es.container.hash: fdfb788c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ee64f7ad5dc9c0fe905d5aee7ee7691e4a8dbab806cf7e0b3d606f81377f55,PodSandboxId:a817ed4049510fff2dac75bac7ff3a587ce9ddcad8df728aae655450d510f25d,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721743196347738021,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-f4tf7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 1198ab14-ccfe-4434-9074-5b62d0a63857,},Annotations:map[string]string{io.kubernetes.container.hash: 809aea1e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008,PodSandboxId:810e5b26ca9647bead40959305a9e93d1a52482a5d5eadbd18be9e6b91b71c67,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721743186315435839,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-xvhbw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 609f955a-77eb-438b-a2ab-0cd9de30daea,},Annotations:map[string]string{io.kubernetes.container.hash: b0a60846,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:619d0d2ad819bcea7b446c4b87725c43f9fb114898d60986de42fa49fb6fbace,PodSandboxId:d4f99f03b17e5991f23fd71745cb9f7be992e63811624250560e258bc60fe705,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721743
147819990716,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-k4b7n,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 51963bc7-84ef-4889-b876-8ef334e75508,},Annotations:map[string]string{io.kubernetes.container.hash: 5c03e32a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec,PodSandboxId:8903d6b2ee136ee48542d7714ff386c8986614fd7efb8389d362490f855d0071,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721743142649544785,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f52cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b45f2b1-e48c-4097-aa53-5c2f5fea4806,},Annotations:map[string]string{io.kubernetes.container.hash: a2cc4088,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff,PodSandboxId:b74f90d4c79026da584258709b55a96bd6395134185514ba895ab8b6a50c04c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721743108151345816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd28f68d-bdb2-47cf-8029-1043b5280270,},Annotations:map[string]string{io.kubernetes.container.hash: 501b2150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117,PodSandboxId:c0b77f115c3b4e698379d5c3d9a89fc1c438996403b0fd2e78f3baeb7e377303,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721743106264785504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zjr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44af35b9-1b02-4ea2-ae0c-edc96976f89a,},Annotations:map[string]string{io.kubernetes.container.hash: 28ad5997,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92,PodSandb
oxId:f9507dae3059da2252c5ff81ac602d79e874ab3596a82965e7ad9b50250789d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721743102414228156,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhm7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf78545-7300-4f1a-a947-7459b858880d,},Annotations:map[string]string{io.kubernetes.container.hash: 8cc035c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9,PodSandboxId:f6ca437600187d8c0975ce84a950
1832e9bf6c97caebf03c8589912375cb82cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721743082710778075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c4991817c80221df8122c97be142fac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f169597,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1,PodSandboxId:80f1b76c4adfbea78e9d5444bc6c427b40e1ef360e75e
9bee6a7f8b742b35535,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721743082687170649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00ef8a0b6566cd313737784fddd8c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce,PodSandboxId:db4bd96561f899625af142422e2f337db3b59b8b1b4d4d9b9d8ac5d5e9883c
df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721743082632631779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d93b7a248f4ed30ac528fabeb2a41fd3,},Annotations:map[string]string{io.kubernetes.container.hash: 99a07cc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096,PodSandboxId:58ae74b9e42d837161d77264590e0fdb3c72182f25e545506b12156d3741b6ec,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721743082632886522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-566823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85fb4b346d8e9b59761bdc715c24a074,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36c377dd-77bd-42d1-99be-0a1fb5e22f1c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b7c3a17efde7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   dbe642c6a83a7       hello-world-app-6778b5fc9f-d7gff
	5eb4666a476ba       docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                         5 minutes ago       Running             nginx                     0                   711231b4bab31       nginx
	83ee64f7ad5dc       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   6 minutes ago       Running             headlamp                  0                   a817ed4049510       headlamp-7867546754-f4tf7
	a6d043106634e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   810e5b26ca964       gcp-auth-5db96cd9b4-xvhbw
	619d0d2ad819b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         7 minutes ago       Running             yakd                      0                   d4f99f03b17e5       yakd-dashboard-799879c74f-k4b7n
	c618693edcba7       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   8903d6b2ee136       metrics-server-c59844bb4-f52cd
	16b95fc22c542       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   b74f90d4c7902       storage-provisioner
	fe0ead5b9ae19       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   c0b77f115c3b4       coredns-7db6d8ff4d-4zjr6
	75690199f376c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   f9507dae3059d       kube-proxy-dhm7l
	cc85cfb34a42b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   f6ca437600187       kube-apiserver-addons-566823
	395cd38ab3a5c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   80f1b76c4adfb       kube-scheduler-addons-566823
	455c6f2b15566       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   58ae74b9e42d8       kube-controller-manager-addons-566823
	e9471edc6aed8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   db4bd96561f89       etcd-addons-566823
	
	
	==> coredns [fe0ead5b9ae1965f914835a35cf3915d2746165e63c5e513f3e203d56820e117] <==
	[INFO] 10.244.0.7:41909 - 59287 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000591904s
	[INFO] 10.244.0.7:40802 - 50087 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086791s
	[INFO] 10.244.0.7:40802 - 46265 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068983s
	[INFO] 10.244.0.7:56840 - 13086 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060242s
	[INFO] 10.244.0.7:56840 - 49688 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119346s
	[INFO] 10.244.0.7:33942 - 29435 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093629s
	[INFO] 10.244.0.7:33942 - 11253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000047752s
	[INFO] 10.244.0.7:59691 - 44171 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099337s
	[INFO] 10.244.0.7:59691 - 2438 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003053s
	[INFO] 10.244.0.7:46306 - 11320 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030244s
	[INFO] 10.244.0.7:46306 - 13882 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072909s
	[INFO] 10.244.0.7:38834 - 28302 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027184s
	[INFO] 10.244.0.7:38834 - 29580 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054539s
	[INFO] 10.244.0.7:56251 - 16677 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033172s
	[INFO] 10.244.0.7:56251 - 53031 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000034123s
	[INFO] 10.244.0.22:58715 - 35388 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000398507s
	[INFO] 10.244.0.22:57233 - 35449 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072661s
	[INFO] 10.244.0.22:45036 - 6393 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082871s
	[INFO] 10.244.0.22:49055 - 29052 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000818041s
	[INFO] 10.244.0.22:46490 - 40988 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000387496s
	[INFO] 10.244.0.22:60739 - 46301 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000054974s
	[INFO] 10.244.0.22:40801 - 21898 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995342s
	[INFO] 10.244.0.22:56403 - 56910 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000502122s
	[INFO] 10.244.0.25:38501 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000397183s
	[INFO] 10.244.0.25:52897 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175644s
	
	
	==> describe nodes <==
	Name:               addons-566823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-566823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=addons-566823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T13_58_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-566823
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 13:58:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-566823
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:06:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:03:13 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:03:13 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:03:13 +0000   Tue, 23 Jul 2024 13:58:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:03:13 +0000   Tue, 23 Jul 2024 13:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-566823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 29339ff5f01d4e0484eccd5ff044a154
	  System UUID:                29339ff5-f01d-4e04-84ec-cd5ff044a154
	  Boot ID:                    3dc7844a-05b8-4110-a26d-f3272538bc6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-d7gff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-5db96cd9b4-xvhbw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  headlamp                    headlamp-7867546754-f4tf7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 coredns-7db6d8ff4d-4zjr6                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m55s
	  kube-system                 etcd-addons-566823                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m9s
	  kube-system                 kube-apiserver-addons-566823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-controller-manager-addons-566823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-proxy-dhm7l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-addons-566823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 metrics-server-c59844bb4-f52cd           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  yakd-dashboard              yakd-dashboard-799879c74f-k4b7n          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m52s                kube-proxy       
	  Normal  Starting                 8m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m9s (x2 over 8m9s)  kubelet          Node addons-566823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m9s (x2 over 8m9s)  kubelet          Node addons-566823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m9s (x2 over 8m9s)  kubelet          Node addons-566823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m8s                 kubelet          Node addons-566823 status is now: NodeReady
	  Normal  RegisteredNode           7m56s                node-controller  Node addons-566823 event: Registered Node addons-566823 in Controller
	
	
	==> dmesg <==
	[  +5.109997] kauditd_printk_skb: 123 callbacks suppressed
	[  +5.246645] kauditd_printk_skb: 165 callbacks suppressed
	[  +6.825625] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.407855] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.056172] kauditd_printk_skb: 14 callbacks suppressed
	[Jul23 13:59] kauditd_printk_skb: 13 callbacks suppressed
	[ +11.955748] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.201226] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.450815] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.470618] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.130135] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.097158] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.884630] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.912545] kauditd_printk_skb: 15 callbacks suppressed
	[Jul23 14:00] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.381272] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.801597] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.038707] kauditd_printk_skb: 36 callbacks suppressed
	[ +21.305403] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.306249] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.421313] kauditd_printk_skb: 3 callbacks suppressed
	[Jul23 14:01] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.319621] kauditd_printk_skb: 33 callbacks suppressed
	[Jul23 14:02] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.001702] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [e9471edc6aed82ee81783a1ddd70f985af540cd15a726cea178398eb56e35bce] <==
	{"level":"warn","ts":"2024-07-23T13:59:48.330536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:47.935652Z","time spent":"394.765159ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" mod_revision:1115 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hqxlxdx7ypjegrayddcaqhf55u\" > >"}
	{"level":"warn","ts":"2024-07-23T13:59:48.330769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.410134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T13:59:48.330825Z","caller":"traceutil/trace.go:171","msg":"trace[1486851517] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1164; }","duration":"328.490434ms","start":"2024-07-23T13:59:48.002324Z","end":"2024-07-23T13:59:48.330814Z","steps":["trace[1486851517] 'agreement among raft nodes before linearized reading'  (duration: 328.361571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.33085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:48.002311Z","time spent":"328.533717ms","remote":"127.0.0.1:60818","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-23T13:59:48.331104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.372845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-23T13:59:48.331235Z","caller":"traceutil/trace.go:171","msg":"trace[479918349] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:1164; }","duration":"300.486059ms","start":"2024-07-23T13:59:48.030668Z","end":"2024-07-23T13:59:48.331154Z","steps":["trace[479918349] 'agreement among raft nodes before linearized reading'  (duration: 300.121192ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.331285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:48.030654Z","time spent":"300.620481ms","remote":"127.0.0.1:50090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":10,"response size":29,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true "}
	{"level":"warn","ts":"2024-07-23T13:59:48.331383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.189743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-07-23T13:59:48.331425Z","caller":"traceutil/trace.go:171","msg":"trace[1054619235] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1164; }","duration":"212.251184ms","start":"2024-07-23T13:59:48.119166Z","end":"2024-07-23T13:59:48.331417Z","steps":["trace[1054619235] 'agreement among raft nodes before linearized reading'  (duration: 212.097316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:48.331246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.966283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-f52cd.17e4dc4466c8be25\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-07-23T13:59:48.331537Z","caller":"traceutil/trace.go:171","msg":"trace[723786253] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-f52cd.17e4dc4466c8be25; range_end:; response_count:1; response_revision:1164; }","duration":"117.285124ms","start":"2024-07-23T13:59:48.214243Z","end":"2024-07-23T13:59:48.331528Z","steps":["trace[723786253] 'agreement among raft nodes before linearized reading'  (duration: 116.923956ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T13:59:55.366357Z","caller":"traceutil/trace.go:171","msg":"trace[1043766184] linearizableReadLoop","detail":"{readStateIndex:1263; appliedIndex:1262; }","duration":"384.731123ms","start":"2024-07-23T13:59:54.981608Z","end":"2024-07-23T13:59:55.366339Z","steps":["trace[1043766184] 'read index received'  (duration: 384.546657ms)","trace[1043766184] 'applied index is now lower than readState.Index'  (duration: 183.791µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T13:59:55.366541Z","caller":"traceutil/trace.go:171","msg":"trace[1808763166] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"385.1786ms","start":"2024-07-23T13:59:54.981349Z","end":"2024-07-23T13:59:55.366528Z","steps":["trace[1808763166] 'process raft request'  (duration: 384.877219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:55.367124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:54.981333Z","time spent":"385.731876ms","remote":"127.0.0.1:50068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" mod_revision:1216 > success:<request_put:<key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" value_size:693 lease:3156619680085819913 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-bsfbc.17e4dc4b0c805b2e\" > >"}
	{"level":"warn","ts":"2024-07-23T13:59:55.366642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.011965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-23T13:59:55.367775Z","caller":"traceutil/trace.go:171","msg":"trace[806612147] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1226; }","duration":"386.173308ms","start":"2024-07-23T13:59:54.98159Z","end":"2024-07-23T13:59:55.367763Z","steps":["trace[806612147] 'agreement among raft nodes before linearized reading'  (duration: 384.974735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:55.368324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:54.981582Z","time spent":"386.725562ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"info","ts":"2024-07-23T13:59:59.770911Z","caller":"traceutil/trace.go:171","msg":"trace[628963373] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"376.433359ms","start":"2024-07-23T13:59:59.394462Z","end":"2024-07-23T13:59:59.770895Z","steps":["trace[628963373] 'process raft request'  (duration: 376.213233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T13:59:59.77206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T13:59:59.394445Z","time spent":"377.38382ms","remote":"127.0.0.1:50168","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1248 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-23T14:00:50.78706Z","caller":"traceutil/trace.go:171","msg":"trace[411388456] linearizableReadLoop","detail":"{readStateIndex:1605; appliedIndex:1604; }","duration":"302.171606ms","start":"2024-07-23T14:00:50.484806Z","end":"2024-07-23T14:00:50.786977Z","steps":["trace[411388456] 'read index received'  (duration: 302.026289ms)","trace[411388456] 'applied index is now lower than readState.Index'  (duration: 144.832µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:00:50.787334Z","caller":"traceutil/trace.go:171","msg":"trace[1487591093] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"341.71765ms","start":"2024-07-23T14:00:50.445599Z","end":"2024-07-23T14:00:50.787317Z","steps":["trace[1487591093] 'process raft request'  (duration: 341.276228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:00:50.787464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:00:50.445584Z","time spent":"341.792142ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1537 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-23T14:00:50.787727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.916272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-23T14:00:50.787773Z","caller":"traceutil/trace.go:171","msg":"trace[1278567850] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1551; }","duration":"302.986535ms","start":"2024-07-23T14:00:50.484778Z","end":"2024-07-23T14:00:50.787765Z","steps":["trace[1278567850] 'agreement among raft nodes before linearized reading'  (duration: 302.881592ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:00:50.787808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:00:50.484765Z","time spent":"303.037249ms","remote":"127.0.0.1:50262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	
	
	==> gcp-auth [a6d043106634ec12260de0a6245ed8560f3e9985dd3bb3e3df54976f8fa22008] <==
	2024/07/23 13:59:46 GCP Auth Webhook started!
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:50 Ready to marshal response ...
	2024/07/23 13:59:50 Ready to write response ...
	2024/07/23 13:59:55 Ready to marshal response ...
	2024/07/23 13:59:55 Ready to write response ...
	2024/07/23 14:00:01 Ready to marshal response ...
	2024/07/23 14:00:01 Ready to write response ...
	2024/07/23 14:00:08 Ready to marshal response ...
	2024/07/23 14:00:08 Ready to write response ...
	2024/07/23 14:00:08 Ready to marshal response ...
	2024/07/23 14:00:08 Ready to write response ...
	2024/07/23 14:00:19 Ready to marshal response ...
	2024/07/23 14:00:19 Ready to write response ...
	2024/07/23 14:00:20 Ready to marshal response ...
	2024/07/23 14:00:20 Ready to write response ...
	2024/07/23 14:00:43 Ready to marshal response ...
	2024/07/23 14:00:43 Ready to write response ...
	2024/07/23 14:01:18 Ready to marshal response ...
	2024/07/23 14:01:18 Ready to write response ...
	2024/07/23 14:02:43 Ready to marshal response ...
	2024/07/23 14:02:43 Ready to write response ...
	
	
	==> kernel <==
	 14:06:16 up 8 min,  0 users,  load average: 1.22, 0.90, 0.58
	Linux addons-566823 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cc85cfb34a42b2d7f7a7917a3bafb4dd99aa24543951201740915568b3c687e9] <==
	W0723 14:00:09.269355       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 14:00:09.269410       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 14:00:09.270532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 14:00:09.632890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0723 14:00:15.007887       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0723 14:00:16.045373       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0723 14:00:20.736659       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0723 14:00:20.954345       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.5.113"}
	E0723 14:00:35.931876       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0723 14:00:57.114683       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0723 14:01:34.514799       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.514855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.537327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.537568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.566982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.567191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.624482       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.624536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:01:34.677035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:01:34.677085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0723 14:01:35.567239       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0723 14:01:35.677639       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0723 14:01:35.686208       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0723 14:02:43.501152       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.78.209"}
	
	
	==> kube-controller-manager [455c6f2b1556691f39fe82eefb04bb08d32a05fcdc37f803c560b3bc94d52096] <==
	W0723 14:03:56.148168       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:03:56.148282       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:04:27.276353       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:04:27.276587       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:04:32.947300       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:04:32.947372       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:04:48.302928       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:04:48.302984       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:04:49.069260       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:04:49.069364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:05:05.463600       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:05:05.463695       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:05:24.636581       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:05:24.636640       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:05:33.051581       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:05:33.051642       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:05:46.808098       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:05:46.808284       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:05:49.672055       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:05:49.672224       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:06:11.103113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:06:11.103160       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:06:12.927727       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:06:12.927829       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0723 14:06:14.671666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.865µs"
	
	
	==> kube-proxy [75690199f376c8ec0e9d47332def123ca3ae5d93465cbb0480901d8fd0e61c92] <==
	I0723 13:58:23.025071       1 server_linux.go:69] "Using iptables proxy"
	I0723 13:58:23.055845       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	I0723 13:58:23.144867       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 13:58:23.144912       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 13:58:23.144928       1 server_linux.go:165] "Using iptables Proxier"
	I0723 13:58:23.149761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 13:58:23.149945       1 server.go:872] "Version info" version="v1.30.3"
	I0723 13:58:23.149957       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 13:58:23.151928       1 config.go:192] "Starting service config controller"
	I0723 13:58:23.151943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 13:58:23.151979       1 config.go:101] "Starting endpoint slice config controller"
	I0723 13:58:23.151984       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 13:58:23.155386       1 config.go:319] "Starting node config controller"
	I0723 13:58:23.155394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 13:58:23.252675       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 13:58:23.252689       1 shared_informer.go:320] Caches are synced for service config
	I0723 13:58:23.256371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395cd38ab3a5c3476a791448e639c2037a2a5a05d4de7364ad32f3f45094e9c1] <==
	W0723 13:58:05.015845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.016667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.015881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 13:58:05.016679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 13:58:05.015913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 13:58:05.016690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 13:58:05.015946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 13:58:05.016701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 13:58:05.015983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.016714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.015565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 13:58:05.016726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0723 13:58:05.016923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 13:58:05.016975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 13:58:05.932434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.932488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:05.967100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:05.967142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:06.064347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 13:58:06.064392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 13:58:06.152152       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 13:58:06.152188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 13:58:06.456527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 13:58:06.457105       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 13:58:08.795481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:04:07 addons-566823 kubelet[1269]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:04:07 addons-566823 kubelet[1269]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:04:07 addons-566823 kubelet[1269]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:04:07 addons-566823 kubelet[1269]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:05:07 addons-566823 kubelet[1269]: E0723 14:05:07.808507    1269 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:05:07 addons-566823 kubelet[1269]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:05:07 addons-566823 kubelet[1269]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:05:07 addons-566823 kubelet[1269]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:05:07 addons-566823 kubelet[1269]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:06:07 addons-566823 kubelet[1269]: E0723 14:06:07.808227    1269 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:06:07 addons-566823 kubelet[1269]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:06:07 addons-566823 kubelet[1269]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:06:07 addons-566823 kubelet[1269]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:06:07 addons-566823 kubelet[1269]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:06:14 addons-566823 kubelet[1269]: I0723 14:06:14.707152    1269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-d7gff" podStartSLOduration=209.427045461 podStartE2EDuration="3m31.707117209s" podCreationTimestamp="2024-07-23 14:02:43 +0000 UTC" firstStartedPulling="2024-07-23 14:02:43.946117747 +0000 UTC m=+276.301643039" lastFinishedPulling="2024-07-23 14:02:46.226189495 +0000 UTC m=+278.581714787" observedRunningTime="2024-07-23 14:02:46.45643273 +0000 UTC m=+278.811958040" watchObservedRunningTime="2024-07-23 14:06:14.707117209 +0000 UTC m=+487.062642522"
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.114313    1269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmswm\" (UniqueName: \"kubernetes.io/projected/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-kube-api-access-xmswm\") pod \"6b45f2b1-e48c-4097-aa53-5c2f5fea4806\" (UID: \"6b45f2b1-e48c-4097-aa53-5c2f5fea4806\") "
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.114387    1269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-tmp-dir\") pod \"6b45f2b1-e48c-4097-aa53-5c2f5fea4806\" (UID: \"6b45f2b1-e48c-4097-aa53-5c2f5fea4806\") "
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.115129    1269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6b45f2b1-e48c-4097-aa53-5c2f5fea4806" (UID: "6b45f2b1-e48c-4097-aa53-5c2f5fea4806"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.117665    1269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-kube-api-access-xmswm" (OuterVolumeSpecName: "kube-api-access-xmswm") pod "6b45f2b1-e48c-4097-aa53-5c2f5fea4806" (UID: "6b45f2b1-e48c-4097-aa53-5c2f5fea4806"). InnerVolumeSpecName "kube-api-access-xmswm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.215182    1269 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xmswm\" (UniqueName: \"kubernetes.io/projected/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-kube-api-access-xmswm\") on node \"addons-566823\" DevicePath \"\""
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.215226    1269 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b45f2b1-e48c-4097-aa53-5c2f5fea4806-tmp-dir\") on node \"addons-566823\" DevicePath \"\""
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.308769    1269 scope.go:117] "RemoveContainer" containerID="c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec"
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.346195    1269 scope.go:117] "RemoveContainer" containerID="c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec"
	Jul 23 14:06:16 addons-566823 kubelet[1269]: E0723 14:06:16.346834    1269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec\": container with ID starting with c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec not found: ID does not exist" containerID="c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec"
	Jul 23 14:06:16 addons-566823 kubelet[1269]: I0723 14:06:16.346869    1269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec"} err="failed to get container status \"c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec\": rpc error: code = NotFound desc = could not find container \"c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec\": container with ID starting with c618693edcba75b97028b4f611ce3e2a4e1fcd0b84ffbbc281d57f19f7f4adec not found: ID does not exist"
	
	
	==> storage-provisioner [16b95fc22c5420a2a81918eac8df5e8270210e81a078bac75dff90b9cae837ff] <==
	I0723 13:58:28.773519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 13:58:29.054175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 13:58:29.054275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 13:58:29.206327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 13:58:29.249446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad92d0f1-0afd-4aae-a180-a98760ca320f", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6 became leader
	I0723 13:58:29.249551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6!
	I0723 13:58:29.512096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-566823_99dbb502-d042-4986-a2e6-ab50484211e6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-566823 -n addons-566823
helpers_test.go:261: (dbg) Run:  kubectl --context addons-566823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (362.50s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-566823
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-566823: exit status 82 (2m0.462140473s)

                                                
                                                
-- stdout --
	* Stopping node "addons-566823"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-566823" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-566823
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-566823: exit status 11 (21.473726539s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-566823" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-566823
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-566823: exit status 11 (6.143745875s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-566823" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-566823
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-566823: exit status 11 (6.144559669s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-566823" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.22s)
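For reference, the failing sequence can be replayed by hand with the exact commands quoted above; the exit codes map to the error classes in the output (82 = GUEST_STOP_TIMEOUT, 11 = MK_ADDON_ENABLE_PAUSED / MK_ADDON_DISABLE_PAUSED). A sketch:

    out/minikube-linux-amd64 stop -p addons-566823;                      echo "stop: $?"
    out/minikube-linux-amd64 addons enable dashboard -p addons-566823;   echo "enable dashboard: $?"
    out/minikube-linux-amd64 addons disable dashboard -p addons-566823;  echo "disable dashboard: $?"
    out/minikube-linux-amd64 addons disable gvisor -p addons-566823;     echo "disable gvisor: $?"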

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 node stop m02 -v=7 --alsologtostderr
E0723 14:18:33.740698   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:19:49.700340   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:19:55.661308   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.48146393s)

                                                
                                                
-- stdout --
	* Stopping node "ha-533645-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:18:06.509000   34160 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:18:06.509298   34160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:18:06.509311   34160 out.go:304] Setting ErrFile to fd 2...
	I0723 14:18:06.509315   34160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:18:06.509557   34160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:18:06.509844   34160 mustload.go:65] Loading cluster: ha-533645
	I0723 14:18:06.510227   34160 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:18:06.510246   34160 stop.go:39] StopHost: ha-533645-m02
	I0723 14:18:06.510666   34160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:18:06.510707   34160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:18:06.527031   34160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0723 14:18:06.527494   34160 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:18:06.528074   34160 main.go:141] libmachine: Using API Version  1
	I0723 14:18:06.528098   34160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:18:06.528480   34160 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:18:06.531069   34160 out.go:177] * Stopping node "ha-533645-m02"  ...
	I0723 14:18:06.532616   34160 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 14:18:06.532649   34160 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:18:06.532884   34160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 14:18:06.532911   34160 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:18:06.536113   34160 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:18:06.536584   34160 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:18:06.536614   34160 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:18:06.536741   34160 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:18:06.536914   34160 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:18:06.537098   34160 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:18:06.537251   34160 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:18:06.626203   34160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 14:18:06.679632   34160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 14:18:06.735367   34160 main.go:141] libmachine: Stopping "ha-533645-m02"...
	I0723 14:18:06.735429   34160 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:18:06.736841   34160 main.go:141] libmachine: (ha-533645-m02) Calling .Stop
	I0723 14:18:06.740058   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 0/120
	I0723 14:18:07.741396   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 1/120
	I0723 14:18:08.742724   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 2/120
	I0723 14:18:09.745014   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 3/120
	I0723 14:18:10.746427   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 4/120
	I0723 14:18:11.748497   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 5/120
	I0723 14:18:12.750817   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 6/120
	I0723 14:18:13.752812   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 7/120
	I0723 14:18:14.754094   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 8/120
	I0723 14:18:15.755496   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 9/120
	I0723 14:18:16.757377   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 10/120
	I0723 14:18:17.758858   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 11/120
	I0723 14:18:18.760191   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 12/120
	I0723 14:18:19.761640   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 13/120
	I0723 14:18:20.763146   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 14/120
	I0723 14:18:21.765320   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 15/120
	I0723 14:18:22.766679   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 16/120
	I0723 14:18:23.768207   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 17/120
	I0723 14:18:24.770033   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 18/120
	I0723 14:18:25.772602   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 19/120
	I0723 14:18:26.774186   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 20/120
	I0723 14:18:27.776050   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 21/120
	I0723 14:18:28.777549   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 22/120
	I0723 14:18:29.779328   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 23/120
	I0723 14:18:30.781044   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 24/120
	I0723 14:18:31.783141   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 25/120
	I0723 14:18:32.784525   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 26/120
	I0723 14:18:33.785985   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 27/120
	I0723 14:18:34.788299   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 28/120
	I0723 14:18:35.789652   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 29/120
	I0723 14:18:36.792071   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 30/120
	I0723 14:18:37.793656   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 31/120
	I0723 14:18:38.795237   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 32/120
	I0723 14:18:39.796960   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 33/120
	I0723 14:18:40.798960   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 34/120
	I0723 14:18:41.800164   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 35/120
	I0723 14:18:42.802524   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 36/120
	I0723 14:18:43.804871   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 37/120
	I0723 14:18:44.806278   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 38/120
	I0723 14:18:45.807732   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 39/120
	I0723 14:18:46.810005   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 40/120
	I0723 14:18:47.812240   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 41/120
	I0723 14:18:48.813886   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 42/120
	I0723 14:18:49.815248   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 43/120
	I0723 14:18:50.816858   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 44/120
	I0723 14:18:51.819354   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 45/120
	I0723 14:18:52.820816   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 46/120
	I0723 14:18:53.822229   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 47/120
	I0723 14:18:54.823640   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 48/120
	I0723 14:18:55.825510   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 49/120
	I0723 14:18:56.827600   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 50/120
	I0723 14:18:57.828946   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 51/120
	I0723 14:18:58.830149   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 52/120
	I0723 14:18:59.831422   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 53/120
	I0723 14:19:00.832857   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 54/120
	I0723 14:19:01.834775   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 55/120
	I0723 14:19:02.836239   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 56/120
	I0723 14:19:03.837462   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 57/120
	I0723 14:19:04.838848   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 58/120
	I0723 14:19:05.840921   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 59/120
	I0723 14:19:06.842660   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 60/120
	I0723 14:19:07.845206   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 61/120
	I0723 14:19:08.846690   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 62/120
	I0723 14:19:09.847987   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 63/120
	I0723 14:19:10.850160   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 64/120
	I0723 14:19:11.852430   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 65/120
	I0723 14:19:12.854349   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 66/120
	I0723 14:19:13.856532   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 67/120
	I0723 14:19:14.857968   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 68/120
	I0723 14:19:15.859461   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 69/120
	I0723 14:19:16.861683   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 70/120
	I0723 14:19:17.863100   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 71/120
	I0723 14:19:18.864823   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 72/120
	I0723 14:19:19.866442   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 73/120
	I0723 14:19:20.867850   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 74/120
	I0723 14:19:21.869697   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 75/120
	I0723 14:19:22.871417   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 76/120
	I0723 14:19:23.873020   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 77/120
	I0723 14:19:24.874551   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 78/120
	I0723 14:19:25.876756   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 79/120
	I0723 14:19:26.878822   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 80/120
	I0723 14:19:27.880355   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 81/120
	I0723 14:19:28.881666   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 82/120
	I0723 14:19:29.883106   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 83/120
	I0723 14:19:30.885424   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 84/120
	I0723 14:19:31.887489   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 85/120
	I0723 14:19:32.889056   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 86/120
	I0723 14:19:33.890399   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 87/120
	I0723 14:19:34.892132   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 88/120
	I0723 14:19:35.894230   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 89/120
	I0723 14:19:36.895966   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 90/120
	I0723 14:19:37.897385   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 91/120
	I0723 14:19:38.898982   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 92/120
	I0723 14:19:39.900820   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 93/120
	I0723 14:19:40.902130   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 94/120
	I0723 14:19:41.904212   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 95/120
	I0723 14:19:42.905783   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 96/120
	I0723 14:19:43.908050   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 97/120
	I0723 14:19:44.909438   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 98/120
	I0723 14:19:45.910742   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 99/120
	I0723 14:19:46.912912   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 100/120
	I0723 14:19:47.914354   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 101/120
	I0723 14:19:48.915993   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 102/120
	I0723 14:19:49.917867   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 103/120
	I0723 14:19:50.919210   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 104/120
	I0723 14:19:51.920767   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 105/120
	I0723 14:19:52.922540   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 106/120
	I0723 14:19:53.924826   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 107/120
	I0723 14:19:54.926345   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 108/120
	I0723 14:19:55.927842   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 109/120
	I0723 14:19:56.929869   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 110/120
	I0723 14:19:57.931282   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 111/120
	I0723 14:19:58.933078   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 112/120
	I0723 14:19:59.935402   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 113/120
	I0723 14:20:00.937108   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 114/120
	I0723 14:20:01.939455   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 115/120
	I0723 14:20:02.941026   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 116/120
	I0723 14:20:03.942678   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 117/120
	I0723 14:20:04.945035   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 118/120
	I0723 14:20:05.946510   34160 main.go:141] libmachine: (ha-533645-m02) Waiting for machine to stop 119/120
	I0723 14:20:06.947988   34160 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 14:20:06.948131   34160 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-533645 node stop m02 -v=7 --alsologtostderr": exit status 30
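The stderr above shows the stop path: back up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, ask the driver to stop the domain, then poll the machine state once per second for 120 attempts before giving up with "unable to stop vm". A rough shell equivalent of that wait loop, assuming the libvirt domain carries the node name and the qemu:///system URI (this is an approximation of the behaviour seen in the log, not the minikube source):

    virsh -c qemu:///system shutdown ha-533645-m02
    for i in $(seq 1 120); do
      state=$(virsh -c qemu:///system domstate ha-533645-m02)
      [ "$state" = "shut off" ] && break
      echo "Waiting for machine to stop ${i}/120"
      sleep 1
    done
    [ "$state" = "shut off" ] || echo "unable to stop vm, current state \"$state\"" >&2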
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (19.070068714s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:06.989612   34584 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:06.989836   34584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:06.989843   34584 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:06.989847   34584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:06.990086   34584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:06.990261   34584 out.go:298] Setting JSON to false
	I0723 14:20:06.990286   34584 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:06.990330   34584 notify.go:220] Checking for updates...
	I0723 14:20:06.990712   34584 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:06.990734   34584 status.go:255] checking status of ha-533645 ...
	I0723 14:20:06.991152   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:06.991213   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.010280   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0723 14:20:07.010676   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.011192   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.011212   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.011584   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.011793   34584 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:07.013351   34584 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:07.013371   34584 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:07.013770   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:07.013812   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.028547   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0723 14:20:07.028932   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.029382   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.029411   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.029926   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.030120   34584 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:07.033171   34584 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:07.033668   34584 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:07.033701   34584 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:07.033868   34584 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:07.034190   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:07.034236   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.049618   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0723 14:20:07.050027   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.050515   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.050542   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.050930   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.051104   34584 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:07.051316   34584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:07.051335   34584 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:07.053953   34584 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:07.054320   34584 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:07.054352   34584 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:07.054506   34584 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:07.054689   34584 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:07.054820   34584 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:07.054948   34584 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:07.145231   34584 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:07.151755   34584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:07.167606   34584 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:07.167635   34584 api_server.go:166] Checking apiserver status ...
	I0723 14:20:07.167679   34584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:07.183925   34584 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:07.193992   34584 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:07.194043   34584 ssh_runner.go:195] Run: ls
	I0723 14:20:07.199102   34584 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:07.203545   34584 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:07.203575   34584 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:07.203585   34584 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:07.203611   34584 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:07.203916   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:07.203947   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.219645   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0723 14:20:07.220065   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.220546   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.220568   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.220857   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.221063   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:07.222724   34584 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:07.222742   34584 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:07.223067   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:07.223104   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.237966   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0723 14:20:07.238444   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.238985   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.239006   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.239369   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.239576   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:07.243078   34584 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:07.243473   34584 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:07.243501   34584 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:07.243609   34584 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:07.243897   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:07.243944   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:07.258915   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0723 14:20:07.259384   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:07.259779   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:07.259839   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:07.260126   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:07.260325   34584 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:07.260512   34584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:07.260533   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:07.263840   34584 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:07.264350   34584 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:07.264377   34584 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:07.264491   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:07.264665   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:07.264811   34584 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:07.264923   34584 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:20:25.662578   34584 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:25.662675   34584 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:20:25.662691   34584 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:25.662700   34584 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:20:25.662717   34584 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:25.662724   34584 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:20:25.663029   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.663120   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.678416   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41261
	I0723 14:20:25.678967   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.679484   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.679500   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.679803   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.679986   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:20:25.681690   34584 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:20:25.681708   34584 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:25.682056   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.682091   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.696730   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I0723 14:20:25.697083   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.697485   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.697512   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.697861   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.698040   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:20:25.701181   34584 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:25.701746   34584 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:25.701774   34584 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:25.701964   34584 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:25.702256   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.702288   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.717004   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I0723 14:20:25.717392   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.717825   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.717845   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.718157   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.718462   34584 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:20:25.718695   34584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:25.718719   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:20:25.721362   34584 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:25.721836   34584 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:25.721863   34584 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:25.721997   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:20:25.722164   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:20:25.722356   34584 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:20:25.722568   34584 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:20:25.803778   34584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:25.823754   34584 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:25.823778   34584 api_server.go:166] Checking apiserver status ...
	I0723 14:20:25.823812   34584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:25.839529   34584 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:20:25.848246   34584 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:25.848326   34584 ssh_runner.go:195] Run: ls
	I0723 14:20:25.852625   34584 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:25.860949   34584 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:25.860969   34584 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:20:25.860977   34584 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:25.860991   34584 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:20:25.861356   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.861394   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.875909   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0723 14:20:25.876349   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.876853   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.876874   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.877169   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.877365   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:20:25.878962   34584 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:20:25.878978   34584 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:25.879268   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.879322   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.893958   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0723 14:20:25.894424   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.894872   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.894889   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.895268   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.895496   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:20:25.898446   34584 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:25.898914   34584 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:25.898928   34584 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:25.899117   34584 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:25.899563   34584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:25.899603   34584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:25.914351   34584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0723 14:20:25.914807   34584 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:25.915263   34584 main.go:141] libmachine: Using API Version  1
	I0723 14:20:25.915285   34584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:25.915578   34584 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:25.915750   34584 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:20:25.915926   34584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:25.915942   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:20:25.918905   34584 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:25.919375   34584 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:25.919403   34584 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:25.919571   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:20:25.919760   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:20:25.919886   34584 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:20:25.920030   34584 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:20:25.999827   34584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:26.017820   34584 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr" : exit status 3
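The status output marks ha-533645-m02 as Error/Nonexistent because the per-node probes that succeed on the other nodes (disk usage over SSH, kubelet unit state, apiserver /healthz on the control-plane VIP) cannot reach 192.168.39.182:22. A hedged way to rerun those probes by hand, reusing the "ssh -n" form already present in the Audit table below and assuming anonymous access to /healthz (the Kubernetes default):

    out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "df -h /var"
    out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo systemctl is-active kubelet"
    out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "df -h /var"   # expected to fail: no route to host
    curl -k https://192.168.39.254:8443/healthz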
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-533645 -n ha-533645
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-533645 logs -n 25: (1.369868526s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m03_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m04 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp testdata/cp-test.txt                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m04_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03:/home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m03 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-533645 node stop m02 -v=7                                                    | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:12:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:12:58.672274   29532 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:12:58.672396   29532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:58.672405   29532 out.go:304] Setting ErrFile to fd 2...
	I0723 14:12:58.672410   29532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:58.672592   29532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:12:58.673181   29532 out.go:298] Setting JSON to false
	I0723 14:12:58.674012   29532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3325,"bootTime":1721740654,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:12:58.674070   29532 start.go:139] virtualization: kvm guest
	I0723 14:12:58.676433   29532 out.go:177] * [ha-533645] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:12:58.677903   29532 notify.go:220] Checking for updates...
	I0723 14:12:58.677916   29532 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:12:58.679517   29532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:12:58.680865   29532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:12:58.682045   29532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:58.683336   29532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:12:58.684490   29532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:12:58.685826   29532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:12:58.719886   29532 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 14:12:58.721256   29532 start.go:297] selected driver: kvm2
	I0723 14:12:58.721288   29532 start.go:901] validating driver "kvm2" against <nil>
	I0723 14:12:58.721309   29532 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:12:58.722079   29532 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:12:58.722169   29532 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:12:58.736944   29532 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:12:58.736992   29532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:12:58.737216   29532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:12:58.737300   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:12:58.737313   29532 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0723 14:12:58.737320   29532 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:12:58.737371   29532 start.go:340] cluster config:
	{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:12:58.737466   29532 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:12:58.739356   29532 out.go:177] * Starting "ha-533645" primary control-plane node in "ha-533645" cluster
	I0723 14:12:58.740608   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:12:58.740643   29532 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:12:58.740661   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:12:58.740724   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:12:58.740734   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:12:58.741010   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:12:58.741028   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json: {Name:mk8b3be7d33f3876fb077f6ec49a9ae7625ff727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:12:58.741160   29532 start.go:360] acquireMachinesLock for ha-533645: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:12:58.741188   29532 start.go:364] duration metric: took 16.714µs to acquireMachinesLock for "ha-533645"
	I0723 14:12:58.741203   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:12:58.741258   29532 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 14:12:58.742731   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:12:58.742853   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:58.742885   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:58.757854   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0723 14:12:58.758290   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:58.758822   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:12:58.758845   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:58.759180   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:58.759420   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:12:58.759561   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:12:58.759673   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:12:58.759702   29532 client.go:168] LocalClient.Create starting
	I0723 14:12:58.759736   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:12:58.759768   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:12:58.759790   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:12:58.759861   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:12:58.759884   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:12:58.759901   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:12:58.759936   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:12:58.759948   29532 main.go:141] libmachine: (ha-533645) Calling .PreCreateCheck
	I0723 14:12:58.760329   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:12:58.760736   29532 main.go:141] libmachine: Creating machine...
	I0723 14:12:58.760752   29532 main.go:141] libmachine: (ha-533645) Calling .Create
	I0723 14:12:58.760880   29532 main.go:141] libmachine: (ha-533645) Creating KVM machine...
	I0723 14:12:58.762130   29532 main.go:141] libmachine: (ha-533645) DBG | found existing default KVM network
	I0723 14:12:58.762820   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:58.762698   29555 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0723 14:12:58.762838   29532 main.go:141] libmachine: (ha-533645) DBG | created network xml: 
	I0723 14:12:58.762856   29532 main.go:141] libmachine: (ha-533645) DBG | <network>
	I0723 14:12:58.762867   29532 main.go:141] libmachine: (ha-533645) DBG |   <name>mk-ha-533645</name>
	I0723 14:12:58.762875   29532 main.go:141] libmachine: (ha-533645) DBG |   <dns enable='no'/>
	I0723 14:12:58.762882   29532 main.go:141] libmachine: (ha-533645) DBG |   
	I0723 14:12:58.762893   29532 main.go:141] libmachine: (ha-533645) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0723 14:12:58.762903   29532 main.go:141] libmachine: (ha-533645) DBG |     <dhcp>
	I0723 14:12:58.762929   29532 main.go:141] libmachine: (ha-533645) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0723 14:12:58.762947   29532 main.go:141] libmachine: (ha-533645) DBG |     </dhcp>
	I0723 14:12:58.762956   29532 main.go:141] libmachine: (ha-533645) DBG |   </ip>
	I0723 14:12:58.762970   29532 main.go:141] libmachine: (ha-533645) DBG |   
	I0723 14:12:58.762997   29532 main.go:141] libmachine: (ha-533645) DBG | </network>
	I0723 14:12:58.763014   29532 main.go:141] libmachine: (ha-533645) DBG | 
	I0723 14:12:58.767859   29532 main.go:141] libmachine: (ha-533645) DBG | trying to create private KVM network mk-ha-533645 192.168.39.0/24...
	I0723 14:12:58.841046   29532 main.go:141] libmachine: (ha-533645) DBG | private KVM network mk-ha-533645 192.168.39.0/24 created
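
The lines above show the kvm2 driver defining a dedicated libvirt network (mk-ha-533645, 192.168.39.0/24) from generated XML and then creating it. Purely as an illustration of that step, here is a minimal sketch assuming the libvirt Go bindings (libvirt.org/go/libvirt); minikube's actual driver code is more involved.

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    // Sketch only: define a persistent private network from XML and start it,
    // mirroring the <network> document printed in the log above.
    const networkXML = `<network>
      <name>mk-ha-533645</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        nw, err := conn.NetworkDefineXML(networkXML) // persist the definition
        if err != nil {
            log.Fatal(err)
        }
        defer nw.Free()

        if err := nw.Create(); err != nil { // activate the network
            log.Fatal(err)
        }
    }
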
	I0723 14:12:58.841154   29532 main.go:141] libmachine: (ha-533645) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 ...
	I0723 14:12:58.841168   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:58.841006   29555 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:58.841179   29532 main.go:141] libmachine: (ha-533645) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:12:58.841267   29532 main.go:141] libmachine: (ha-533645) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:12:59.077944   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.077811   29555 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa...
	I0723 14:12:59.183323   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.183169   29555 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/ha-533645.rawdisk...
	I0723 14:12:59.183379   29532 main.go:141] libmachine: (ha-533645) DBG | Writing magic tar header
	I0723 14:12:59.183404   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 (perms=drwx------)
	I0723 14:12:59.183432   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:12:59.183440   29532 main.go:141] libmachine: (ha-533645) DBG | Writing SSH key tar header
	I0723 14:12:59.183452   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.183278   29555 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 ...
	I0723 14:12:59.183459   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645
	I0723 14:12:59.183467   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:12:59.183474   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:59.183484   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:12:59.183491   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:12:59.183501   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:12:59.183513   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home
	I0723 14:12:59.183524   29532 main.go:141] libmachine: (ha-533645) DBG | Skipping /home - not owner
	I0723 14:12:59.183535   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:12:59.183549   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:12:59.183558   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:12:59.183595   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:12:59.183620   29532 main.go:141] libmachine: (ha-533645) Creating domain...
	I0723 14:12:59.184578   29532 main.go:141] libmachine: (ha-533645) define libvirt domain using xml: 
	I0723 14:12:59.184594   29532 main.go:141] libmachine: (ha-533645) <domain type='kvm'>
	I0723 14:12:59.184601   29532 main.go:141] libmachine: (ha-533645)   <name>ha-533645</name>
	I0723 14:12:59.184609   29532 main.go:141] libmachine: (ha-533645)   <memory unit='MiB'>2200</memory>
	I0723 14:12:59.184624   29532 main.go:141] libmachine: (ha-533645)   <vcpu>2</vcpu>
	I0723 14:12:59.184634   29532 main.go:141] libmachine: (ha-533645)   <features>
	I0723 14:12:59.184641   29532 main.go:141] libmachine: (ha-533645)     <acpi/>
	I0723 14:12:59.184646   29532 main.go:141] libmachine: (ha-533645)     <apic/>
	I0723 14:12:59.184658   29532 main.go:141] libmachine: (ha-533645)     <pae/>
	I0723 14:12:59.184672   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.184680   29532 main.go:141] libmachine: (ha-533645)   </features>
	I0723 14:12:59.184688   29532 main.go:141] libmachine: (ha-533645)   <cpu mode='host-passthrough'>
	I0723 14:12:59.184706   29532 main.go:141] libmachine: (ha-533645)   
	I0723 14:12:59.184730   29532 main.go:141] libmachine: (ha-533645)   </cpu>
	I0723 14:12:59.184738   29532 main.go:141] libmachine: (ha-533645)   <os>
	I0723 14:12:59.184743   29532 main.go:141] libmachine: (ha-533645)     <type>hvm</type>
	I0723 14:12:59.184753   29532 main.go:141] libmachine: (ha-533645)     <boot dev='cdrom'/>
	I0723 14:12:59.184759   29532 main.go:141] libmachine: (ha-533645)     <boot dev='hd'/>
	I0723 14:12:59.184765   29532 main.go:141] libmachine: (ha-533645)     <bootmenu enable='no'/>
	I0723 14:12:59.184770   29532 main.go:141] libmachine: (ha-533645)   </os>
	I0723 14:12:59.184776   29532 main.go:141] libmachine: (ha-533645)   <devices>
	I0723 14:12:59.184782   29532 main.go:141] libmachine: (ha-533645)     <disk type='file' device='cdrom'>
	I0723 14:12:59.184790   29532 main.go:141] libmachine: (ha-533645)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/boot2docker.iso'/>
	I0723 14:12:59.184796   29532 main.go:141] libmachine: (ha-533645)       <target dev='hdc' bus='scsi'/>
	I0723 14:12:59.184814   29532 main.go:141] libmachine: (ha-533645)       <readonly/>
	I0723 14:12:59.184832   29532 main.go:141] libmachine: (ha-533645)     </disk>
	I0723 14:12:59.184845   29532 main.go:141] libmachine: (ha-533645)     <disk type='file' device='disk'>
	I0723 14:12:59.184857   29532 main.go:141] libmachine: (ha-533645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:12:59.184872   29532 main.go:141] libmachine: (ha-533645)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/ha-533645.rawdisk'/>
	I0723 14:12:59.184884   29532 main.go:141] libmachine: (ha-533645)       <target dev='hda' bus='virtio'/>
	I0723 14:12:59.184895   29532 main.go:141] libmachine: (ha-533645)     </disk>
	I0723 14:12:59.184908   29532 main.go:141] libmachine: (ha-533645)     <interface type='network'>
	I0723 14:12:59.184921   29532 main.go:141] libmachine: (ha-533645)       <source network='mk-ha-533645'/>
	I0723 14:12:59.184931   29532 main.go:141] libmachine: (ha-533645)       <model type='virtio'/>
	I0723 14:12:59.184942   29532 main.go:141] libmachine: (ha-533645)     </interface>
	I0723 14:12:59.184951   29532 main.go:141] libmachine: (ha-533645)     <interface type='network'>
	I0723 14:12:59.184963   29532 main.go:141] libmachine: (ha-533645)       <source network='default'/>
	I0723 14:12:59.184974   29532 main.go:141] libmachine: (ha-533645)       <model type='virtio'/>
	I0723 14:12:59.184990   29532 main.go:141] libmachine: (ha-533645)     </interface>
	I0723 14:12:59.185000   29532 main.go:141] libmachine: (ha-533645)     <serial type='pty'>
	I0723 14:12:59.185011   29532 main.go:141] libmachine: (ha-533645)       <target port='0'/>
	I0723 14:12:59.185019   29532 main.go:141] libmachine: (ha-533645)     </serial>
	I0723 14:12:59.185030   29532 main.go:141] libmachine: (ha-533645)     <console type='pty'>
	I0723 14:12:59.185044   29532 main.go:141] libmachine: (ha-533645)       <target type='serial' port='0'/>
	I0723 14:12:59.185072   29532 main.go:141] libmachine: (ha-533645)     </console>
	I0723 14:12:59.185081   29532 main.go:141] libmachine: (ha-533645)     <rng model='virtio'>
	I0723 14:12:59.185094   29532 main.go:141] libmachine: (ha-533645)       <backend model='random'>/dev/random</backend>
	I0723 14:12:59.185102   29532 main.go:141] libmachine: (ha-533645)     </rng>
	I0723 14:12:59.185118   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.185128   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.185139   29532 main.go:141] libmachine: (ha-533645)   </devices>
	I0723 14:12:59.185145   29532 main.go:141] libmachine: (ha-533645) </domain>
	I0723 14:12:59.185152   29532 main.go:141] libmachine: (ha-533645) 
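
The domain XML printed after "define libvirt domain using xml:" is assembled by the driver before the VM is defined. As an illustration only (hypothetical template and field names, not minikube's), the pieces that vary per profile — name, memory, vCPU count — can be filled into such a document with Go's text/template:

    package main

    import (
        "os"
        "text/template"
    )

    // Hypothetical, minimal template covering only the fields that vary in the
    // domain XML logged above; the real driver emits a much fuller document.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
    </domain>
    `

    type domainParams struct {
        Name      string
        MemoryMiB int
        CPUs      int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values taken from the log: 2200 MiB of memory and 2 vCPUs.
        _ = t.Execute(os.Stdout, domainParams{Name: "ha-533645", MemoryMiB: 2200, CPUs: 2})
    }
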
	I0723 14:12:59.189460   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:d7:33:3e in network default
	I0723 14:12:59.189915   29532 main.go:141] libmachine: (ha-533645) Ensuring networks are active...
	I0723 14:12:59.189930   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:12:59.190515   29532 main.go:141] libmachine: (ha-533645) Ensuring network default is active
	I0723 14:12:59.190817   29532 main.go:141] libmachine: (ha-533645) Ensuring network mk-ha-533645 is active
	I0723 14:12:59.191444   29532 main.go:141] libmachine: (ha-533645) Getting domain xml...
	I0723 14:12:59.192254   29532 main.go:141] libmachine: (ha-533645) Creating domain...
	I0723 14:13:00.367182   29532 main.go:141] libmachine: (ha-533645) Waiting to get IP...
	I0723 14:13:00.367830   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.368289   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.368309   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.368263   29555 retry.go:31] will retry after 233.748173ms: waiting for machine to come up
	I0723 14:13:00.603785   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.604342   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.604373   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.604301   29555 retry.go:31] will retry after 286.19202ms: waiting for machine to come up
	I0723 14:13:00.891818   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.892280   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.892312   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.892242   29555 retry.go:31] will retry after 451.009456ms: waiting for machine to come up
	I0723 14:13:01.344946   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:01.345381   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:01.345407   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:01.345355   29555 retry.go:31] will retry after 553.896723ms: waiting for machine to come up
	I0723 14:13:01.901183   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:01.901698   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:01.901726   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:01.901658   29555 retry.go:31] will retry after 573.029693ms: waiting for machine to come up
	I0723 14:13:02.476534   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:02.476957   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:02.476983   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:02.476912   29555 retry.go:31] will retry after 687.916409ms: waiting for machine to come up
	I0723 14:13:03.166977   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:03.167398   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:03.167425   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:03.167351   29555 retry.go:31] will retry after 1.032404149s: waiting for machine to come up
	I0723 14:13:04.201178   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:04.202182   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:04.202205   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:04.202127   29555 retry.go:31] will retry after 1.12337681s: waiting for machine to come up
	I0723 14:13:05.326795   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:05.327203   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:05.327232   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:05.327158   29555 retry.go:31] will retry after 1.320525567s: waiting for machine to come up
	I0723 14:13:06.649527   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:06.649867   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:06.649886   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:06.649819   29555 retry.go:31] will retry after 2.047276994s: waiting for machine to come up
	I0723 14:13:08.699610   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:08.700095   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:08.700128   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:08.700043   29555 retry.go:31] will retry after 2.504888725s: waiting for machine to come up
	I0723 14:13:11.208286   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:11.208682   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:11.208711   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:11.208634   29555 retry.go:31] will retry after 3.516838711s: waiting for machine to come up
	I0723 14:13:14.727069   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:14.727433   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:14.727466   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:14.727385   29555 retry.go:31] will retry after 3.819451455s: waiting for machine to come up
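
The run of "will retry after …" lines above is a bounded polling loop: the driver repeatedly asks libvirt for the domain's DHCP lease and sleeps with growing, jittered delays until an address appears. A generic sketch of that pattern using only the standard library (not minikube's retry package; names are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() until it returns an address or the deadline passes,
    // sleeping with a growing, jittered backoff between attempts — the same shape
    // as the "will retry after ..." lines in the log above.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
        stop := time.Now().Add(deadline)
        backoff := 200 * time.Millisecond
        for time.Now().Before(stop) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.39.103", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
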
	I0723 14:13:18.551305   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.551720   29532 main.go:141] libmachine: (ha-533645) Found IP for machine: 192.168.39.103
	I0723 14:13:18.551742   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has current primary IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.551749   29532 main.go:141] libmachine: (ha-533645) Reserving static IP address...
	I0723 14:13:18.552061   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find host DHCP lease matching {name: "ha-533645", mac: "52:54:00:a6:b1:de", ip: "192.168.39.103"} in network mk-ha-533645
	I0723 14:13:18.623653   29532 main.go:141] libmachine: (ha-533645) DBG | Getting to WaitForSSH function...
	I0723 14:13:18.623681   29532 main.go:141] libmachine: (ha-533645) Reserved static IP address: 192.168.39.103
	I0723 14:13:18.623695   29532 main.go:141] libmachine: (ha-533645) Waiting for SSH to be available...
	I0723 14:13:18.625925   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.626286   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.626312   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.626489   29532 main.go:141] libmachine: (ha-533645) DBG | Using SSH client type: external
	I0723 14:13:18.626516   29532 main.go:141] libmachine: (ha-533645) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa (-rw-------)
	I0723 14:13:18.626553   29532 main.go:141] libmachine: (ha-533645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:13:18.626567   29532 main.go:141] libmachine: (ha-533645) DBG | About to run SSH command:
	I0723 14:13:18.626581   29532 main.go:141] libmachine: (ha-533645) DBG | exit 0
	I0723 14:13:18.758328   29532 main.go:141] libmachine: (ha-533645) DBG | SSH cmd err, output: <nil>: 
	I0723 14:13:18.758648   29532 main.go:141] libmachine: (ha-533645) KVM machine creation complete!
	I0723 14:13:18.758933   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:13:18.759466   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:18.759692   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:18.759871   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:13:18.759909   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:18.761057   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:13:18.761075   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:13:18.761091   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:13:18.761100   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.763501   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.763869   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.763888   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.764092   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.764245   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.764400   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.764550   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.764771   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.764959   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.764969   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:13:18.877574   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
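
At this point the driver establishes reachability by running `exit 0` over SSH and treating the first clean exit as "SSH available". A rough equivalent that shells out to the system ssh client, using the same options logged above (key path, user and timeout here are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns once "exit 0" succeeds over SSH, mirroring the
    // WaitForSSH step in the log above.
    func sshReady(ip, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-i", keyPath,
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "docker@"+ip, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("SSH to %s not available after %v", ip, timeout)
    }

    func main() {
        err := sshReady("192.168.39.103", "/path/to/id_rsa", 2*time.Minute)
        fmt.Println("ssh ready:", err == nil)
    }
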
	I0723 14:13:18.877597   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:13:18.877605   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.880535   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.880887   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.880928   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.881069   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.881257   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.881431   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.881604   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.881789   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.881963   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.881974   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:13:18.994799   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:13:18.994882   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:13:18.994896   29532 main.go:141] libmachine: Provisioning with buildroot...
	I0723 14:13:18.994907   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:18.995128   29532 buildroot.go:166] provisioning hostname "ha-533645"
	I0723 14:13:18.995160   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:18.995337   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.997998   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.998333   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.998360   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.998527   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.998682   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.998814   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.998906   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.999009   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.999221   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.999235   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645 && echo "ha-533645" | sudo tee /etc/hostname
	I0723 14:13:19.128271   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:13:19.128312   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.130983   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.131417   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.131446   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.131603   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.131806   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.131959   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.132097   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.132278   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:19.132491   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:19.132509   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:13:19.256881   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
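
The shell snippet the driver just ran rewrites the guest's /etc/hosts so 127.0.1.1 resolves to the new hostname, touching the file only when the name is not already present. For illustration, the same idempotent edit expressed directly in Go (a simplified sketch; the real step runs the shell via sudo on the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setLoopbackHostname points the 127.0.1.1 entry of an /etc/hosts-style file
    // at name, appending one if no such entry exists — the same effect as the
    // shell snippet logged above.
    func setLoopbackHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        replaced := false
        for i, line := range lines {
            if strings.HasPrefix(line, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+name)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setLoopbackHostname("/etc/hosts", "ha-533645"); err != nil {
            fmt.Println("error:", err)
        }
    }
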
	I0723 14:13:19.256908   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:13:19.256958   29532 buildroot.go:174] setting up certificates
	I0723 14:13:19.256970   29532 provision.go:84] configureAuth start
	I0723 14:13:19.256980   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:19.257259   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:19.260049   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.260464   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.260489   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.260631   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.262752   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.263139   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.263164   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.263292   29532 provision.go:143] copyHostCerts
	I0723 14:13:19.263352   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:13:19.263398   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:13:19.263410   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:13:19.263486   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:13:19.263596   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:13:19.263624   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:13:19.263632   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:13:19.263675   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:13:19.263737   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:13:19.263760   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:13:19.263768   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:13:19.263799   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:13:19.263868   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645 san=[127.0.0.1 192.168.39.103 ha-533645 localhost minikube]
	I0723 14:13:19.813421   29532 provision.go:177] copyRemoteCerts
	I0723 14:13:19.813491   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:13:19.813515   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.816359   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.816799   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.816826   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.817027   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.817246   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.817440   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.817562   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:19.904061   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:13:19.904138   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:13:19.927488   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:13:19.927553   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0723 14:13:19.949586   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:13:19.949646   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:13:19.971467   29532 provision.go:87] duration metric: took 714.485733ms to configureAuth
	I0723 14:13:19.971489   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:13:19.971682   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:19.971751   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.974778   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.975112   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.975155   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.975291   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.975522   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.975706   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.975830   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.976045   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:19.976223   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:19.976242   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:13:20.253784   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
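
A note on the `%!s(MISSING)` in the command above (and the `%!N(MISSING)` in the later `date +…` probe): this is Go's fmt notation for a format verb with no matching argument. The literal `%s`/`%N` inside the shell command get re-interpreted when the command string passes through a printf-style logger, so only the logged text is mangled; the guest evidently received the intended `printf %s …` and `date +%s.%N` (the later reply 1721744000.482293360 is seconds.nanoseconds). A minimal reproduction of the marker, plain Go, nothing minikube-specific:

    package main

    import "fmt"

    func main() {
        // A verb with no matching argument is rendered as "%!<verb>(MISSING)".
        fmt.Println(fmt.Sprintf("date +%s.%N")) // prints: date +%!s(MISSING).%!N(MISSING)
    }
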
	I0723 14:13:20.253809   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:13:20.253818   29532 main.go:141] libmachine: (ha-533645) Calling .GetURL
	I0723 14:13:20.255041   29532 main.go:141] libmachine: (ha-533645) DBG | Using libvirt version 6000000
	I0723 14:13:20.257204   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.257568   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.257603   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.257770   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:13:20.257785   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:13:20.257792   29532 client.go:171] duration metric: took 21.498079198s to LocalClient.Create
	I0723 14:13:20.257819   29532 start.go:167] duration metric: took 21.49814807s to libmachine.API.Create "ha-533645"
	I0723 14:13:20.257827   29532 start.go:293] postStartSetup for "ha-533645" (driver="kvm2")
	I0723 14:13:20.257836   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:13:20.257851   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.258057   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:13:20.258078   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.260109   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.260423   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.260441   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.260489   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.260632   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.260757   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.260893   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.349477   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:13:20.353478   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:13:20.353498   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:13:20.353581   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:13:20.353670   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:13:20.353681   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:13:20.353787   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:13:20.363689   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:13:20.386551   29532 start.go:296] duration metric: took 128.692671ms for postStartSetup
	I0723 14:13:20.386625   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:13:20.387156   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:20.389939   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.390372   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.390419   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.390644   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:20.390824   29532 start.go:128] duration metric: took 21.649555719s to createHost
	I0723 14:13:20.390846   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.393022   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.393337   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.393368   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.393515   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.393711   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.393892   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.394045   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.394236   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:20.394426   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:20.394441   29532 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 14:13:20.506831   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744000.482293360
	
	I0723 14:13:20.506853   29532 fix.go:216] guest clock: 1721744000.482293360
	I0723 14:13:20.506865   29532 fix.go:229] Guest: 2024-07-23 14:13:20.48229336 +0000 UTC Remote: 2024-07-23 14:13:20.390836223 +0000 UTC m=+21.751704249 (delta=91.457137ms)
	I0723 14:13:20.506915   29532 fix.go:200] guest clock delta is within tolerance: 91.457137ms
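The delta reported here is simply the guest clock minus the host-side timestamp captured at the same moment, comfortably inside minikube's drift tolerance. As a quick check of the arithmetic: 1721744000.482293360 - 1721744000.390836223 = 0.091457137 s ≈ 91.457137 ms.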
	I0723 14:13:20.506923   29532 start.go:83] releasing machines lock for "ha-533645", held for 21.76572613s
	I0723 14:13:20.506949   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.507189   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:20.509580   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.509983   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.510015   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.510240   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.510782   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.510956   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.511028   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:13:20.511087   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.511186   29532 ssh_runner.go:195] Run: cat /version.json
	I0723 14:13:20.511210   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.513410   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513685   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.513710   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513796   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513888   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.514054   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.514197   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.514227   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.514272   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.514308   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.514395   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.514580   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.514730   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.514864   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.595339   29532 ssh_runner.go:195] Run: systemctl --version
	I0723 14:13:20.628867   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:13:20.784107   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:13:20.789943   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:13:20.790008   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:13:20.805053   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 14:13:20.805072   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:13:20.805139   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:13:20.820000   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:13:20.832376   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:13:20.832438   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:13:20.845699   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:13:20.858830   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:13:20.972567   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:13:21.107566   29532 docker.go:233] disabling docker service ...
	I0723 14:13:21.107632   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:13:21.121555   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:13:21.134136   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:13:21.262624   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:13:21.391783   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:13:21.404689   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:13:21.421455   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:13:21.421518   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.431023   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:13:21.431075   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.440711   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.450208   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.459592   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:13:21.469581   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.479380   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.495105   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
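Taken together, the sed/grep edits above configure cri-o for this run: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. One way to confirm the result on the node would be the grep below (an illustrative check, not something the test runs):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])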
	I0723 14:13:21.504735   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:13:21.513466   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:13:21.513514   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:13:21.526071   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
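The sysctl failure above only means br_netfilter was not loaded yet; after the modprobe and the ip_forward write, the expected state can be verified by hand (illustrative commands, not part of the test run):

	lsmod | grep br_netfilter                       # module now loaded
	sudo sysctl net.bridge.bridge-nf-call-iptables  # resolves once the module is present (typically 1)
	cat /proc/sys/net/ipv4/ip_forward               # 1, written above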
	I0723 14:13:21.534984   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:13:21.641089   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:13:21.773861   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:13:21.773940   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:13:21.778588   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:13:21.778652   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:13:21.782156   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:13:21.819340   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:13:21.819414   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:13:21.850001   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:13:21.878625   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:13:21.880044   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:21.883002   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:21.883375   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:21.883407   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:21.883591   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:13:21.887590   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
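This one-liner is minikube's usual /etc/hosts update idiom: drop any stale host.minikube.internal entry, append the fresh one, and copy the temp file back with a single sudo cp so the file is never left half-written. The same pattern is reused further down for control-plane.minikube.internal. In generic form (hostname and IP here are placeholders):

	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo "192.168.39.1	myhost.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts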
	I0723 14:13:21.900122   29532 kubeadm.go:883] updating cluster {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:13:21.900247   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:13:21.900324   29532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:13:21.932197   29532 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 14:13:21.932271   29532 ssh_runner.go:195] Run: which lz4
	I0723 14:13:21.935844   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0723 14:13:21.935943   29532 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 14:13:21.939680   29532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 14:13:21.939714   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 14:13:23.123318   29532 crio.go:462] duration metric: took 1.187404654s to copy over tarball
	I0723 14:13:23.123381   29532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 14:13:25.188987   29532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06558121s)
	I0723 14:13:25.189014   29532 crio.go:469] duration metric: took 2.065669362s to extract the tarball
	I0723 14:13:25.189023   29532 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 14:13:25.225220   29532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:13:25.266110   29532 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:13:25.266131   29532 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:13:25.266141   29532 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0723 14:13:25.266252   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
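The [Unit]/[Service] fragment above is what later gets written as the kubelet drop-in (10-kubeadm.conf) and kubelet.service a few lines below. Once installed, it can be inspected with systemd's own tooling (illustrative commands, not executed by the test):

	systemctl cat kubelet               # service file plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager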
	I0723 14:13:25.266331   29532 ssh_runner.go:195] Run: crio config
	I0723 14:13:25.313634   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:13:25.313655   29532 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 14:13:25.313664   29532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:13:25.313685   29532 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-533645 NodeName:ha-533645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:13:25.313815   29532 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-533645"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
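The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new further down and copied to kubeadm.yaml just before init. If one wanted to sanity-check it by hand first, a dry run is the least invasive option (a sketch; minikube itself goes straight to init below):

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run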
	
	I0723 14:13:25.313836   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:13:25.313875   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:13:25.328705   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:13:25.328808   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
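This manifest is dropped into /etc/kubernetes/manifests/kube-vip.yaml below, so kubelet runs kube-vip as a static pod that claims the API VIP 192.168.39.254 on eth0. Once the control plane is up, two quick checks would be (illustrative, not run by the test):

	sudo crictl ps --name kube-vip            # static pod container is running
	ip addr show eth0 | grep 192.168.39.254   # VIP is bound on the expected interface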
	I0723 14:13:25.328861   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:13:25.337965   29532 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:13:25.338025   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0723 14:13:25.346714   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0723 14:13:25.361425   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:13:25.375921   29532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0723 14:13:25.391070   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0723 14:13:25.405958   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:13:25.409434   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:13:25.420629   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:13:25.547842   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:13:25.564142   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.103
	I0723 14:13:25.564165   29532 certs.go:194] generating shared ca certs ...
	I0723 14:13:25.564184   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.564334   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:13:25.564399   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:13:25.564413   29532 certs.go:256] generating profile certs ...
	I0723 14:13:25.564476   29532 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:13:25.564493   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt with IP's: []
	I0723 14:13:25.700047   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt ...
	I0723 14:13:25.700087   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt: {Name:mkdba522527eda92ff71cd385739078b14c4da31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.700291   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key ...
	I0723 14:13:25.700306   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key: {Name:mk57a69bd0df653423e3606733f06b485248df4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.700421   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19
	I0723 14:13:25.700450   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.254]
	I0723 14:13:26.126470   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 ...
	I0723 14:13:26.126520   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19: {Name:mka663770b2d6e465e2b11b311dd3ec7a6e75761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.126726   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19 ...
	I0723 14:13:26.126747   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19: {Name:mk89e7bb911a6fd02eb0dfe171c83292d64d8626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.126852   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:13:26.126945   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
	I0723 14:13:26.127003   29532 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:13:26.127020   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt with IP's: []
	I0723 14:13:26.185627   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt ...
	I0723 14:13:26.185657   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt: {Name:mk0404f7330cbad6dd18ebcf21636895af066fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.185836   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key ...
	I0723 14:13:26.185849   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key: {Name:mk5e155bbb1610feeadaca4f2dff9a332eedfeec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.185939   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:13:26.185958   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:13:26.185969   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:13:26.185980   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:13:26.185989   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:13:26.185999   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:13:26.186008   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:13:26.186016   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:13:26.186063   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:13:26.186100   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:13:26.186109   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:13:26.186129   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:13:26.186151   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:13:26.186172   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:13:26.186208   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:13:26.186233   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.186248   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.186260   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.186750   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:13:26.210325   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:13:26.241496   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:13:26.265608   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:13:26.288171   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 14:13:26.313195   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:13:26.334438   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:13:26.355720   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:13:26.376950   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:13:26.398642   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:13:26.419904   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:13:26.441271   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:13:26.456295   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:13:26.461735   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:13:26.471578   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.475622   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.475682   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.481057   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:13:26.490898   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:13:26.501742   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.505963   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.506010   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.511324   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:13:26.521234   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:13:26.531053   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.534968   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.535016   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.540072   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
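The three blocks above all follow the standard OpenSSL hashed-directory convention: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, then add a <subject-hash>.0 symlink so TLS libraries that scan the hashed directory can find it. The generic recipe (the filename is a placeholder):

	sudo ln -fs /usr/share/ca-certificates/myca.pem /etc/ssl/certs/myca.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myca.pem)
	sudo ln -fs /etc/ssl/certs/myca.pem "/etc/ssl/certs/${HASH}.0"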
	I0723 14:13:26.549832   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:13:26.553382   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:13:26.553440   29532 kubeadm.go:392] StartCluster: {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:13:26.553509   29532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:13:26.553572   29532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:13:26.591209   29532 cri.go:89] found id: ""
	I0723 14:13:26.591290   29532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 14:13:26.600612   29532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 14:13:26.609774   29532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 14:13:26.618689   29532 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 14:13:26.618706   29532 kubeadm.go:157] found existing configuration files:
	
	I0723 14:13:26.618748   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 14:13:26.627038   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 14:13:26.627085   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 14:13:26.635723   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 14:13:26.643970   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 14:13:26.644025   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 14:13:26.652491   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 14:13:26.660797   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 14:13:26.660839   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 14:13:26.669068   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 14:13:26.676901   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 14:13:26.676951   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 14:13:26.685421   29532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 14:13:26.789759   29532 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 14:13:26.789852   29532 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 14:13:26.900787   29532 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 14:13:26.900881   29532 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 14:13:26.900970   29532 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 14:13:27.091115   29532 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 14:13:27.203393   29532 out.go:204]   - Generating certificates and keys ...
	I0723 14:13:27.203560   29532 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 14:13:27.203643   29532 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 14:13:27.395577   29532 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 14:13:27.650739   29532 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 14:13:27.745494   29532 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 14:13:27.944713   29532 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 14:13:28.063008   29532 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 14:13:28.063169   29532 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-533645 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0723 14:13:28.209317   29532 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 14:13:28.209435   29532 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-533645 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0723 14:13:28.283585   29532 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 14:13:28.432664   29532 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 14:13:28.562553   29532 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 14:13:28.562811   29532 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 14:13:28.732219   29532 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 14:13:28.812903   29532 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 14:13:28.892698   29532 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 14:13:28.971458   29532 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 14:13:29.155999   29532 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 14:13:29.156504   29532 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 14:13:29.159037   29532 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 14:13:29.160699   29532 out.go:204]   - Booting up control plane ...
	I0723 14:13:29.160829   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 14:13:29.160932   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 14:13:29.161386   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 14:13:29.182816   29532 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 14:13:29.183752   29532 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 14:13:29.183838   29532 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 14:13:29.304883   29532 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 14:13:29.305002   29532 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 14:13:30.305295   29532 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001177191s
	I0723 14:13:30.305434   29532 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 14:13:36.075937   29532 kubeadm.go:310] [api-check] The API server is healthy after 5.773933875s
	I0723 14:13:36.089267   29532 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 14:13:36.106915   29532 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 14:13:36.139368   29532 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 14:13:36.139600   29532 kubeadm.go:310] [mark-control-plane] Marking the node ha-533645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 14:13:36.151222   29532 kubeadm.go:310] [bootstrap-token] Using token: r8wrz6.fvv9w307l0rufqz8
	I0723 14:13:36.152654   29532 out.go:204]   - Configuring RBAC rules ...
	I0723 14:13:36.152802   29532 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 14:13:36.162332   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 14:13:36.169997   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 14:13:36.173840   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 14:13:36.180187   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 14:13:36.184136   29532 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 14:13:36.485391   29532 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 14:13:36.923963   29532 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 14:13:37.486074   29532 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 14:13:37.487129   29532 kubeadm.go:310] 
	I0723 14:13:37.487198   29532 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 14:13:37.487211   29532 kubeadm.go:310] 
	I0723 14:13:37.487280   29532 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 14:13:37.487287   29532 kubeadm.go:310] 
	I0723 14:13:37.487355   29532 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 14:13:37.487433   29532 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 14:13:37.487486   29532 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 14:13:37.487492   29532 kubeadm.go:310] 
	I0723 14:13:37.487539   29532 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 14:13:37.487546   29532 kubeadm.go:310] 
	I0723 14:13:37.487584   29532 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 14:13:37.487590   29532 kubeadm.go:310] 
	I0723 14:13:37.487631   29532 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 14:13:37.487697   29532 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 14:13:37.487778   29532 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 14:13:37.487797   29532 kubeadm.go:310] 
	I0723 14:13:37.487909   29532 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 14:13:37.488010   29532 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 14:13:37.488021   29532 kubeadm.go:310] 
	I0723 14:13:37.488119   29532 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8wrz6.fvv9w307l0rufqz8 \
	I0723 14:13:37.488213   29532 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 14:13:37.488236   29532 kubeadm.go:310] 	--control-plane 
	I0723 14:13:37.488242   29532 kubeadm.go:310] 
	I0723 14:13:37.488319   29532 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 14:13:37.488326   29532 kubeadm.go:310] 
	I0723 14:13:37.488398   29532 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8wrz6.fvv9w307l0rufqz8 \
	I0723 14:13:37.488487   29532 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 14:13:37.489160   29532 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 14:13:37.489185   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:13:37.489195   29532 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 14:13:37.491817   29532 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0723 14:13:37.493187   29532 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0723 14:13:37.498133   29532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0723 14:13:37.498151   29532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0723 14:13:37.517082   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0723 14:13:37.872313   29532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 14:13:37.872462   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645 minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=true
	I0723 14:13:37.872466   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:37.900159   29532 ops.go:34] apiserver oom_adj: -16
	I0723 14:13:38.047799   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:38.548348   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:39.047929   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:39.548745   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:40.048202   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:40.548107   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:41.048459   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:41.548430   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:42.048252   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:42.548693   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:43.048154   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:43.548461   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:44.048672   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:44.547962   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:45.048566   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:45.548044   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:46.048781   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:46.548869   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:47.047972   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:47.548625   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:48.048437   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:48.548297   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.048503   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.547915   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.629806   29532 kubeadm.go:1113] duration metric: took 11.757413084s to wait for elevateKubeSystemPrivileges
	I0723 14:13:49.629847   29532 kubeadm.go:394] duration metric: took 23.076409381s to StartCluster
	I0723 14:13:49.629870   29532 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:49.629959   29532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:13:49.630823   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:49.631055   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 14:13:49.631067   29532 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 14:13:49.631127   29532 addons.go:69] Setting storage-provisioner=true in profile "ha-533645"
	I0723 14:13:49.631140   29532 addons.go:69] Setting default-storageclass=true in profile "ha-533645"
	I0723 14:13:49.631158   29532 addons.go:234] Setting addon storage-provisioner=true in "ha-533645"
	I0723 14:13:49.631186   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:13:49.631053   29532 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:13:49.631299   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:13:49.631185   29532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-533645"
	I0723 14:13:49.631277   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:49.631603   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.631638   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.631661   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.631687   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.646860   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0723 14:13:49.646873   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0723 14:13:49.647284   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.647345   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.647824   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.647840   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.648000   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.648024   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.648296   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.648338   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.648459   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.648863   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.648891   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.650782   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:13:49.651118   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0723 14:13:49.651675   29532 cert_rotation.go:137] Starting client certificate rotation controller
	I0723 14:13:49.651919   29532 addons.go:234] Setting addon default-storageclass=true in "ha-533645"
	I0723 14:13:49.651970   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:13:49.652341   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.652379   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.664300   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I0723 14:13:49.664780   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.665339   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.665363   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.665744   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.666023   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.667762   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0723 14:13:49.667903   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:49.668110   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.668552   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.668575   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.669003   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.669465   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.669487   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.669767   29532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 14:13:49.671199   29532 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:13:49.671213   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 14:13:49.671225   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:49.674005   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.674363   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:49.674402   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.674639   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:49.674837   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:49.675004   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:49.675164   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:49.684886   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0723 14:13:49.685571   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.686108   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.686127   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.686479   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.686728   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.688380   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:49.688613   29532 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 14:13:49.688632   29532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 14:13:49.688651   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:49.691491   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.691911   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:49.691938   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.692072   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:49.692258   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:49.692395   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:49.692576   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:49.801260   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0723 14:13:49.812232   29532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:13:49.856436   29532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 14:13:50.350278   29532 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0723 14:13:50.557552   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557580   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557586   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557602   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557891   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.557924   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
	I0723 14:13:50.557946   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.557958   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557966   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557921   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
	I0723 14:13:50.557929   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558014   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558025   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.558034   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.558194   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558210   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558234   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558247   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558324   29532 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0723 14:13:50.558331   29532 round_trippers.go:469] Request Headers:
	I0723 14:13:50.558341   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:13:50.558350   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:13:50.571348   29532 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0723 14:13:50.571850   29532 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0723 14:13:50.571866   29532 round_trippers.go:469] Request Headers:
	I0723 14:13:50.571873   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:13:50.571877   29532 round_trippers.go:473]     Content-Type: application/json
	I0723 14:13:50.571881   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:13:50.574245   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:13:50.574374   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.574400   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.574662   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.574678   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.574679   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
	I0723 14:13:50.576415   29532 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0723 14:13:50.577704   29532 addons.go:510] duration metric: took 946.631825ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0723 14:13:50.577738   29532 start.go:246] waiting for cluster config update ...
	I0723 14:13:50.577753   29532 start.go:255] writing updated cluster config ...
	I0723 14:13:50.579300   29532 out.go:177] 
	I0723 14:13:50.580697   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:50.580759   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:50.582326   29532 out.go:177] * Starting "ha-533645-m02" control-plane node in "ha-533645" cluster
	I0723 14:13:50.583632   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:13:50.583658   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:13:50.583743   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:13:50.583754   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:13:50.583809   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:50.583954   29532 start.go:360] acquireMachinesLock for ha-533645-m02: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:13:50.583991   29532 start.go:364] duration metric: took 20.534µs to acquireMachinesLock for "ha-533645-m02"
	I0723 14:13:50.584006   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:13:50.584071   29532 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0723 14:13:50.585666   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:13:50.585738   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:50.585763   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:50.600326   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0723 14:13:50.600701   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:50.601155   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:50.601174   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:50.601486   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:50.601727   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:13:50.601908   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:13:50.602159   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:13:50.602185   29532 client.go:168] LocalClient.Create starting
	I0723 14:13:50.602223   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:13:50.602293   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:13:50.602316   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:13:50.602403   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:13:50.602436   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:13:50.602450   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:13:50.602477   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:13:50.602491   29532 main.go:141] libmachine: (ha-533645-m02) Calling .PreCreateCheck
	I0723 14:13:50.602678   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:13:50.603159   29532 main.go:141] libmachine: Creating machine...
	I0723 14:13:50.603177   29532 main.go:141] libmachine: (ha-533645-m02) Calling .Create
	I0723 14:13:50.603303   29532 main.go:141] libmachine: (ha-533645-m02) Creating KVM machine...
	I0723 14:13:50.604684   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found existing default KVM network
	I0723 14:13:50.604792   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found existing private KVM network mk-ha-533645
	I0723 14:13:50.604913   29532 main.go:141] libmachine: (ha-533645-m02) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 ...
	I0723 14:13:50.604942   29532 main.go:141] libmachine: (ha-533645-m02) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:13:50.605005   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:50.604919   29960 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:13:50.605147   29532 main.go:141] libmachine: (ha-533645-m02) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:13:50.847352   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:50.847207   29960 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa...
	I0723 14:13:51.162927   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:51.162819   29960 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/ha-533645-m02.rawdisk...
	I0723 14:13:51.162960   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Writing magic tar header
	I0723 14:13:51.162971   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Writing SSH key tar header
	I0723 14:13:51.162983   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:51.162934   29960 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 ...
	I0723 14:13:51.163125   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02
	I0723 14:13:51.163143   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 (perms=drwx------)
	I0723 14:13:51.163151   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:13:51.163162   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:13:51.163172   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:13:51.163184   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:13:51.163194   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:13:51.163203   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home
	I0723 14:13:51.163215   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Skipping /home - not owner
	I0723 14:13:51.163226   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:13:51.163237   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:13:51.163244   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:13:51.163257   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:13:51.163270   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:13:51.163283   29532 main.go:141] libmachine: (ha-533645-m02) Creating domain...
	I0723 14:13:51.164259   29532 main.go:141] libmachine: (ha-533645-m02) define libvirt domain using xml: 
	I0723 14:13:51.164278   29532 main.go:141] libmachine: (ha-533645-m02) <domain type='kvm'>
	I0723 14:13:51.164288   29532 main.go:141] libmachine: (ha-533645-m02)   <name>ha-533645-m02</name>
	I0723 14:13:51.164312   29532 main.go:141] libmachine: (ha-533645-m02)   <memory unit='MiB'>2200</memory>
	I0723 14:13:51.164321   29532 main.go:141] libmachine: (ha-533645-m02)   <vcpu>2</vcpu>
	I0723 14:13:51.164332   29532 main.go:141] libmachine: (ha-533645-m02)   <features>
	I0723 14:13:51.164340   29532 main.go:141] libmachine: (ha-533645-m02)     <acpi/>
	I0723 14:13:51.164349   29532 main.go:141] libmachine: (ha-533645-m02)     <apic/>
	I0723 14:13:51.164376   29532 main.go:141] libmachine: (ha-533645-m02)     <pae/>
	I0723 14:13:51.164403   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164417   29532 main.go:141] libmachine: (ha-533645-m02)   </features>
	I0723 14:13:51.164433   29532 main.go:141] libmachine: (ha-533645-m02)   <cpu mode='host-passthrough'>
	I0723 14:13:51.164442   29532 main.go:141] libmachine: (ha-533645-m02)   
	I0723 14:13:51.164448   29532 main.go:141] libmachine: (ha-533645-m02)   </cpu>
	I0723 14:13:51.164453   29532 main.go:141] libmachine: (ha-533645-m02)   <os>
	I0723 14:13:51.164460   29532 main.go:141] libmachine: (ha-533645-m02)     <type>hvm</type>
	I0723 14:13:51.164465   29532 main.go:141] libmachine: (ha-533645-m02)     <boot dev='cdrom'/>
	I0723 14:13:51.164472   29532 main.go:141] libmachine: (ha-533645-m02)     <boot dev='hd'/>
	I0723 14:13:51.164479   29532 main.go:141] libmachine: (ha-533645-m02)     <bootmenu enable='no'/>
	I0723 14:13:51.164488   29532 main.go:141] libmachine: (ha-533645-m02)   </os>
	I0723 14:13:51.164507   29532 main.go:141] libmachine: (ha-533645-m02)   <devices>
	I0723 14:13:51.164519   29532 main.go:141] libmachine: (ha-533645-m02)     <disk type='file' device='cdrom'>
	I0723 14:13:51.164527   29532 main.go:141] libmachine: (ha-533645-m02)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/boot2docker.iso'/>
	I0723 14:13:51.164533   29532 main.go:141] libmachine: (ha-533645-m02)       <target dev='hdc' bus='scsi'/>
	I0723 14:13:51.164538   29532 main.go:141] libmachine: (ha-533645-m02)       <readonly/>
	I0723 14:13:51.164548   29532 main.go:141] libmachine: (ha-533645-m02)     </disk>
	I0723 14:13:51.164555   29532 main.go:141] libmachine: (ha-533645-m02)     <disk type='file' device='disk'>
	I0723 14:13:51.164563   29532 main.go:141] libmachine: (ha-533645-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:13:51.164571   29532 main.go:141] libmachine: (ha-533645-m02)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/ha-533645-m02.rawdisk'/>
	I0723 14:13:51.164578   29532 main.go:141] libmachine: (ha-533645-m02)       <target dev='hda' bus='virtio'/>
	I0723 14:13:51.164583   29532 main.go:141] libmachine: (ha-533645-m02)     </disk>
	I0723 14:13:51.164591   29532 main.go:141] libmachine: (ha-533645-m02)     <interface type='network'>
	I0723 14:13:51.164597   29532 main.go:141] libmachine: (ha-533645-m02)       <source network='mk-ha-533645'/>
	I0723 14:13:51.164603   29532 main.go:141] libmachine: (ha-533645-m02)       <model type='virtio'/>
	I0723 14:13:51.164609   29532 main.go:141] libmachine: (ha-533645-m02)     </interface>
	I0723 14:13:51.164619   29532 main.go:141] libmachine: (ha-533645-m02)     <interface type='network'>
	I0723 14:13:51.164624   29532 main.go:141] libmachine: (ha-533645-m02)       <source network='default'/>
	I0723 14:13:51.164631   29532 main.go:141] libmachine: (ha-533645-m02)       <model type='virtio'/>
	I0723 14:13:51.164637   29532 main.go:141] libmachine: (ha-533645-m02)     </interface>
	I0723 14:13:51.164642   29532 main.go:141] libmachine: (ha-533645-m02)     <serial type='pty'>
	I0723 14:13:51.164647   29532 main.go:141] libmachine: (ha-533645-m02)       <target port='0'/>
	I0723 14:13:51.164654   29532 main.go:141] libmachine: (ha-533645-m02)     </serial>
	I0723 14:13:51.164660   29532 main.go:141] libmachine: (ha-533645-m02)     <console type='pty'>
	I0723 14:13:51.164667   29532 main.go:141] libmachine: (ha-533645-m02)       <target type='serial' port='0'/>
	I0723 14:13:51.164672   29532 main.go:141] libmachine: (ha-533645-m02)     </console>
	I0723 14:13:51.164684   29532 main.go:141] libmachine: (ha-533645-m02)     <rng model='virtio'>
	I0723 14:13:51.164690   29532 main.go:141] libmachine: (ha-533645-m02)       <backend model='random'>/dev/random</backend>
	I0723 14:13:51.164697   29532 main.go:141] libmachine: (ha-533645-m02)     </rng>
	I0723 14:13:51.164702   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164706   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164711   29532 main.go:141] libmachine: (ha-533645-m02)   </devices>
	I0723 14:13:51.164715   29532 main.go:141] libmachine: (ha-533645-m02) </domain>
	I0723 14:13:51.164752   29532 main.go:141] libmachine: (ha-533645-m02) 
	I0723 14:13:51.171811   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:72:7a:52 in network default
	I0723 14:13:51.172363   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring networks are active...
	I0723 14:13:51.172381   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:51.173135   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring network default is active
	I0723 14:13:51.173472   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring network mk-ha-533645 is active
	I0723 14:13:51.173838   29532 main.go:141] libmachine: (ha-533645-m02) Getting domain xml...
	I0723 14:13:51.174609   29532 main.go:141] libmachine: (ha-533645-m02) Creating domain...
	I0723 14:13:52.396694   29532 main.go:141] libmachine: (ha-533645-m02) Waiting to get IP...
	I0723 14:13:52.397454   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.397864   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.397892   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.397857   29960 retry.go:31] will retry after 291.455513ms: waiting for machine to come up
	I0723 14:13:52.691665   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.692234   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.692259   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.692186   29960 retry.go:31] will retry after 276.688811ms: waiting for machine to come up
	I0723 14:13:52.970744   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.971146   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.971175   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.971098   29960 retry.go:31] will retry after 321.108369ms: waiting for machine to come up
	I0723 14:13:53.294049   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:53.294465   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:53.294496   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:53.294421   29960 retry.go:31] will retry after 579.782128ms: waiting for machine to come up
	I0723 14:13:53.876292   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:53.876738   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:53.876765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:53.876696   29960 retry.go:31] will retry after 533.186824ms: waiting for machine to come up
	I0723 14:13:54.411515   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:54.411942   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:54.411964   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:54.411913   29960 retry.go:31] will retry after 659.951767ms: waiting for machine to come up
	I0723 14:13:55.073839   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:55.074392   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:55.074426   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:55.074328   29960 retry.go:31] will retry after 915.678094ms: waiting for machine to come up
	I0723 14:13:55.991449   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:55.991897   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:55.991926   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:55.991848   29960 retry.go:31] will retry after 1.130153568s: waiting for machine to come up
	I0723 14:13:57.124226   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:57.124793   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:57.124821   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:57.124744   29960 retry.go:31] will retry after 1.350718893s: waiting for machine to come up
	I0723 14:13:58.477352   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:58.477782   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:58.477805   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:58.477740   29960 retry.go:31] will retry after 2.162424933s: waiting for machine to come up
	I0723 14:14:00.642131   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:00.642561   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:00.642587   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:00.642529   29960 retry.go:31] will retry after 1.904873624s: waiting for machine to come up
	I0723 14:14:02.548616   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:02.549141   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:02.549171   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:02.549073   29960 retry.go:31] will retry after 2.896313096s: waiting for machine to come up
	I0723 14:14:05.449196   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:05.449740   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:05.449767   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:05.449703   29960 retry.go:31] will retry after 4.145626381s: waiting for machine to come up
	I0723 14:14:09.599382   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:09.599737   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:09.599760   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:09.599691   29960 retry.go:31] will retry after 3.465080003s: waiting for machine to come up
	I0723 14:14:13.067839   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.068249   29532 main.go:141] libmachine: (ha-533645-m02) Found IP for machine: 192.168.39.182
	I0723 14:14:13.068274   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has current primary IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.068282   29532 main.go:141] libmachine: (ha-533645-m02) Reserving static IP address...
	I0723 14:14:13.068684   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find host DHCP lease matching {name: "ha-533645-m02", mac: "52:54:00:a0:97:d5", ip: "192.168.39.182"} in network mk-ha-533645
	I0723 14:14:13.138284   29532 main.go:141] libmachine: (ha-533645-m02) Reserved static IP address: 192.168.39.182
	I0723 14:14:13.138317   29532 main.go:141] libmachine: (ha-533645-m02) Waiting for SSH to be available...
	I0723 14:14:13.138327   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Getting to WaitForSSH function...
	I0723 14:14:13.141165   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.141569   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.141598   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.141774   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using SSH client type: external
	I0723 14:14:13.141800   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa (-rw-------)
	I0723 14:14:13.141828   29532 main.go:141] libmachine: (ha-533645-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:14:13.141841   29532 main.go:141] libmachine: (ha-533645-m02) DBG | About to run SSH command:
	I0723 14:14:13.141855   29532 main.go:141] libmachine: (ha-533645-m02) DBG | exit 0
	I0723 14:14:13.266560   29532 main.go:141] libmachine: (ha-533645-m02) DBG | SSH cmd err, output: <nil>: 
	I0723 14:14:13.266861   29532 main.go:141] libmachine: (ha-533645-m02) KVM machine creation complete!
	I0723 14:14:13.267104   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:14:13.267679   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:13.267903   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:13.268102   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:14:13.268116   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:14:13.269460   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:14:13.269473   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:14:13.269478   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:14:13.269485   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.271813   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.272192   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.272219   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.272354   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.272509   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.272665   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.272786   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.272980   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.273160   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.273176   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:14:13.381609   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:14:13.381631   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:14:13.381638   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.384393   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.384736   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.384765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.384918   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.385127   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.385361   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.385593   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.385776   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.386028   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.386048   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:14:13.494780   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:14:13.494834   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:14:13.494841   29532 main.go:141] libmachine: Provisioning with buildroot...
	I0723 14:14:13.494849   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.495134   29532 buildroot.go:166] provisioning hostname "ha-533645-m02"
	I0723 14:14:13.495166   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.495365   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.498165   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.498614   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.498642   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.498791   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.498929   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.499081   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.499192   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.499333   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.499478   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.499491   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645-m02 && echo "ha-533645-m02" | sudo tee /etc/hostname
	I0723 14:14:13.620479   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645-m02
	
	I0723 14:14:13.620506   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.623524   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.623854   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.623878   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.624047   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.624242   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.624397   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.624548   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.624723   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.624920   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.624938   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:14:13.738808   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:14:13.738830   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:14:13.738844   29532 buildroot.go:174] setting up certificates
	I0723 14:14:13.738854   29532 provision.go:84] configureAuth start
	I0723 14:14:13.738862   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.739159   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:13.741541   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.741917   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.741942   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.742108   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.744426   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.744774   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.744791   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.744960   29532 provision.go:143] copyHostCerts
	I0723 14:14:13.744988   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:14:13.745022   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:14:13.745035   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:14:13.745108   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:14:13.745217   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:14:13.745242   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:14:13.745250   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:14:13.745285   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:14:13.745349   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:14:13.745372   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:14:13.745381   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:14:13.745414   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:14:13.745476   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645-m02 san=[127.0.0.1 192.168.39.182 ha-533645-m02 localhost minikube]
	I0723 14:14:13.978917   29532 provision.go:177] copyRemoteCerts
	I0723 14:14:13.978974   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:14:13.978995   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.981686   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.982008   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.982038   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.982268   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.982483   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.982661   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.982822   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.064211   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:14:14.064274   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:14:14.087261   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:14:14.087349   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:14:14.109351   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:14:14.109428   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:14:14.131249   29532 provision.go:87] duration metric: took 392.38503ms to configureAuth
	I0723 14:14:14.131274   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:14:14.131449   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:14:14.131511   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.134184   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.134589   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.134618   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.134772   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.134967   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.135154   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.135294   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.135463   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:14.135654   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:14.135670   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:14:14.396639   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:14:14.396671   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:14:14.396682   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetURL
	I0723 14:14:14.398000   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using libvirt version 6000000
	I0723 14:14:14.400069   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.400435   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.400461   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.400643   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:14:14.400665   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:14:14.400673   29532 client.go:171] duration metric: took 23.798481003s to LocalClient.Create
	I0723 14:14:14.400693   29532 start.go:167] duration metric: took 23.798536032s to libmachine.API.Create "ha-533645"
	I0723 14:14:14.400703   29532 start.go:293] postStartSetup for "ha-533645-m02" (driver="kvm2")
	I0723 14:14:14.400715   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:14:14.400740   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.400983   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:14:14.401004   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.402975   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.403300   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.403327   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.403514   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.403695   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.403845   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.403980   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.489386   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:14:14.493473   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:14:14.493496   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:14:14.493567   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:14:14.493636   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:14:14.493645   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:14:14.493719   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:14:14.502656   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:14:14.524105   29532 start.go:296] duration metric: took 123.388205ms for postStartSetup
	I0723 14:14:14.524151   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:14:14.524729   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:14.527071   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.527484   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.527511   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.527748   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:14:14.527926   29532 start.go:128] duration metric: took 23.943845027s to createHost
	I0723 14:14:14.527948   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.529894   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.530255   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.530281   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.530512   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.530712   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.530871   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.531025   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.531275   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:14.531427   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:14.531437   29532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:14:14.639058   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744054.616220575
	
	I0723 14:14:14.639082   29532 fix.go:216] guest clock: 1721744054.616220575
	I0723 14:14:14.639131   29532 fix.go:229] Guest: 2024-07-23 14:14:14.616220575 +0000 UTC Remote: 2024-07-23 14:14:14.527937381 +0000 UTC m=+75.888805407 (delta=88.283194ms)
	I0723 14:14:14.639157   29532 fix.go:200] guest clock delta is within tolerance: 88.283194ms
	I0723 14:14:14.639165   29532 start.go:83] releasing machines lock for "ha-533645-m02", held for 24.055164779s
	I0723 14:14:14.639187   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.639458   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:14.641765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.642062   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.642089   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.644382   29532 out.go:177] * Found network options:
	I0723 14:14:14.645811   29532 out.go:177]   - NO_PROXY=192.168.39.103
	W0723 14:14:14.646900   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:14:14.646929   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647393   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647568   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647655   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:14:14.647703   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	W0723 14:14:14.647801   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:14:14.647872   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:14:14.647893   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.650400   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650654   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650820   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.650846   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650991   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.651012   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.651018   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.651198   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.651229   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.651375   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.651378   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.651528   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.651527   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.651675   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.880669   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:14:14.886780   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:14:14.886840   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:14:14.901863   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 14:14:14.901889   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:14:14.901942   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:14:14.918281   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:14:14.932298   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:14:14.932370   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:14:14.945919   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:14:14.960255   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:14:15.099840   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:14:15.247026   29532 docker.go:233] disabling docker service ...
	I0723 14:14:15.247105   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:14:15.261726   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:14:15.275008   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:14:15.413571   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:14:15.545731   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:14:15.558812   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:14:15.576442   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:14:15.576511   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.586249   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:14:15.586315   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.595885   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.606494   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.616503   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:14:15.626527   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.636291   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.651849   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.661721   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:14:15.670999   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:14:15.671064   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:14:15.683748   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:14:15.692463   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:14:15.826299   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:14:15.963799   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:14:15.963867   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:14:15.968898   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:14:15.968960   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:14:15.972395   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:14:16.014002   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:14:16.014084   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:14:16.041646   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:14:16.071891   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:14:16.073468   29532 out.go:177]   - env NO_PROXY=192.168.39.103
	I0723 14:14:16.074794   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:16.077996   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:16.078474   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:16.078503   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:16.078713   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:14:16.082668   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:14:16.094157   29532 mustload.go:65] Loading cluster: ha-533645
	I0723 14:14:16.094392   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:14:16.094788   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:16.094827   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:16.109938   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0723 14:14:16.110421   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:16.110893   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:16.110913   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:16.111282   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:16.111518   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:14:16.113111   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:14:16.113400   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:16.113429   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:16.127908   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0723 14:14:16.128363   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:16.128829   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:16.128852   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:16.129140   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:16.129377   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:14:16.129547   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.182
	I0723 14:14:16.129559   29532 certs.go:194] generating shared ca certs ...
	I0723 14:14:16.129571   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.129684   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:14:16.129721   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:14:16.129727   29532 certs.go:256] generating profile certs ...
	I0723 14:14:16.129786   29532 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:14:16.129810   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93
	I0723 14:14:16.129822   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.254]
	I0723 14:14:16.240824   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 ...
	I0723 14:14:16.240856   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93: {Name:mkf9d33d57e4f2ae7e43ba01e73119266f40336d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.241018   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93 ...
	I0723 14:14:16.241030   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93: {Name:mk6277f2ca8f2772f186f6bb140a40234df422b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.241099   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:14:16.241226   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
	I0723 14:14:16.241346   29532 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:14:16.241361   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:14:16.241373   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:14:16.241385   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:14:16.241395   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:14:16.241407   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:14:16.241420   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:14:16.241432   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:14:16.241444   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:14:16.241488   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:14:16.241519   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:14:16.241528   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:14:16.241549   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:14:16.241569   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:14:16.241590   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:14:16.241631   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:14:16.241656   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.241671   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.241682   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.241712   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:14:16.244564   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:16.245153   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:14:16.245181   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:16.245351   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:14:16.245561   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:14:16.245693   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:14:16.245836   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:14:16.322863   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0723 14:14:16.327760   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0723 14:14:16.338466   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0723 14:14:16.342374   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0723 14:14:16.352176   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0723 14:14:16.356217   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0723 14:14:16.365902   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0723 14:14:16.369997   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0723 14:14:16.380157   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0723 14:14:16.384290   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0723 14:14:16.395742   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0723 14:14:16.400204   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0723 14:14:16.412059   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:14:16.436169   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:14:16.461338   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:14:16.484637   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:14:16.507795   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0723 14:14:16.529936   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:14:16.553134   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:14:16.576152   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:14:16.604070   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:14:16.626957   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:14:16.648213   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:14:16.669192   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0723 14:14:16.683778   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0723 14:14:16.698487   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0723 14:14:16.713322   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0723 14:14:16.728095   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0723 14:14:16.742720   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0723 14:14:16.757481   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0723 14:14:16.773001   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:14:16.778525   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:14:16.788682   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.792836   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.792904   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.798288   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:14:16.808807   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:14:16.818637   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.822738   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.822776   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.827819   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:14:16.837444   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:14:16.847316   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.851435   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.851492   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.856731   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:14:16.866308   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:14:16.869873   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:14:16.869929   29532 kubeadm.go:934] updating node {m02 192.168.39.182 8443 v1.30.3 crio true true} ...
	I0723 14:14:16.870021   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:14:16.870049   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:14:16.870084   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:14:16.886068   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:14:16.886136   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0723 14:14:16.886200   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:14:16.895349   29532 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0723 14:14:16.895407   29532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0723 14:14:16.906476   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0723 14:14:16.906508   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:14:16.906544   29532 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0723 14:14:16.906599   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:14:16.906643   29532 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0723 14:14:16.910632   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0723 14:14:16.910674   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0723 14:14:29.349175   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:14:29.349257   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:14:29.354160   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0723 14:14:29.354194   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0723 14:14:42.245752   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:14:42.260722   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:14:42.260826   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:14:42.264854   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0723 14:14:42.264884   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0723 14:14:42.628617   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0723 14:14:42.637383   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 14:14:42.653188   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:14:42.668190   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:14:42.682869   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:14:42.686308   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:14:42.697199   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:14:42.805737   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:14:42.821471   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:14:42.821937   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:42.821976   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:42.836985   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0723 14:14:42.837466   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:42.837978   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:42.838003   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:42.838280   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:42.838489   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:14:42.838659   29532 start.go:317] joinCluster: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:14:42.838750   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0723 14:14:42.838765   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:14:42.841670   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:42.842072   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:14:42.842085   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:42.842312   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:14:42.842521   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:14:42.842702   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:14:42.842885   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:14:43.002055   29532 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:14:43.002102   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ycf2e.biroaztat8xgm11s --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m02 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443"
	I0723 14:15:05.497769   29532 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ycf2e.biroaztat8xgm11s --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m02 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443": (22.495634398s)
	I0723 14:15:05.497814   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0723 14:15:06.028807   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645-m02 minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=false
	I0723 14:15:06.176625   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-533645-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0723 14:15:06.285185   29532 start.go:319] duration metric: took 23.446521558s to joinCluster
	I0723 14:15:06.285272   29532 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:06.285577   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:06.286986   29532 out.go:177] * Verifying Kubernetes components...
	I0723 14:15:06.288823   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:06.512358   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:15:06.552647   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:15:06.552860   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0723 14:15:06.552913   29532 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.103:8443
	I0723 14:15:06.553076   29532 node_ready.go:35] waiting up to 6m0s for node "ha-533645-m02" to be "Ready" ...
	I0723 14:15:06.553154   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:06.553163   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:06.553170   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:06.553175   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:06.564244   29532 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0723 14:15:07.053315   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:07.053336   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:07.053344   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:07.053346   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:07.062284   29532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0723 14:15:07.553707   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:07.553727   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:07.553736   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:07.553740   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:07.556934   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.053376   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:08.053396   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:08.053404   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:08.053409   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:08.056563   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.553550   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:08.553574   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:08.553581   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:08.553588   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:08.556881   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.557500   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:09.053608   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:09.053628   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:09.053636   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:09.053640   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:09.056661   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:09.553509   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:09.553530   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:09.553538   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:09.553542   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:09.556768   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:10.053832   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:10.053853   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:10.053860   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:10.053863   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:10.057580   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:10.553837   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:10.553857   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:10.553865   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:10.553869   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:10.558249   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:10.559156   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:11.054143   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:11.054163   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:11.054175   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:11.054181   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:11.068256   29532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0723 14:15:11.554175   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:11.554194   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:11.554201   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:11.554204   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:11.557457   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:12.053540   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:12.053565   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:12.053572   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:12.053576   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:12.056752   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:12.553600   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:12.553621   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:12.553630   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:12.553635   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:12.557709   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:13.053642   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:13.053662   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:13.053673   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:13.053680   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:13.057427   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:13.058173   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:13.553601   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:13.553626   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:13.553637   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:13.553643   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:13.556870   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:14.053505   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:14.053528   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:14.053534   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:14.053538   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:14.057100   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:14.554147   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:14.554169   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:14.554177   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:14.554182   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:14.557559   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.053439   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:15.053461   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:15.053469   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:15.053476   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:15.057015   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.553398   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:15.553419   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:15.553426   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:15.553429   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:15.556910   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.557403   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:16.053333   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:16.053355   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:16.053364   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:16.053369   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:16.056997   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:16.554116   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:16.554157   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:16.554168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:16.554173   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:16.557450   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:17.053458   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:17.053480   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:17.053488   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:17.053491   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:17.058211   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:17.553611   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:17.553633   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:17.553640   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:17.553643   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:17.557491   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:17.558562   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:18.053352   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:18.053373   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:18.053381   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:18.053386   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:18.057023   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:18.553374   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:18.553394   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:18.553402   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:18.553405   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:18.556503   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:19.053385   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:19.053412   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:19.053423   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:19.053429   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:19.056635   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:19.553607   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:19.553642   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:19.553656   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:19.553661   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:19.557131   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:20.054066   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:20.054088   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:20.054096   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:20.054102   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:20.057285   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:20.057829   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:20.554189   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:20.554210   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:20.554218   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:20.554223   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:20.557832   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:21.053561   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:21.053586   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:21.053594   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:21.053599   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:21.056689   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:21.553511   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:21.553532   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:21.553540   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:21.553544   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:21.556479   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.053389   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.053411   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.053419   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.053424   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.057335   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.058158   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:22.553487   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.553510   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.553519   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.553525   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.556650   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.557345   29532 node_ready.go:49] node "ha-533645-m02" has status "Ready":"True"
	I0723 14:15:22.557361   29532 node_ready.go:38] duration metric: took 16.004270893s for node "ha-533645-m02" to be "Ready" ...
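(Editorial note: the loop above polls GET /api/v1/nodes/ha-533645-m02 roughly every 500ms until the Ready condition turns True. A hedged, equivalent check with kubectl rather than raw API calls:)

  # Block until the node reports Ready (same 6m budget as the wait above)
  kubectl wait --for=condition=Ready node/ha-533645-m02 --timeout=6m
  # Or read the Ready condition directly
  kubectl get node ha-533645-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'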
	I0723 14:15:22.557369   29532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:15:22.557440   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:22.557448   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.557455   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.557460   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.563161   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:22.571822   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.571900   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nrvbf
	I0723 14:15:22.571909   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.571917   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.571923   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.574992   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.575535   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.575552   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.575562   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.575567   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.578041   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.578615   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.578632   29532 pod_ready.go:81] duration metric: took 6.781836ms for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.578640   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.578695   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s6xzz
	I0723 14:15:22.578703   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.578710   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.578716   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.581333   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.581849   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.581863   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.581870   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.581874   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.584576   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.585055   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.585076   29532 pod_ready.go:81] duration metric: took 6.428477ms for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.585088   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.585142   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645
	I0723 14:15:22.585153   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.585162   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.585172   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.587839   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.588446   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.588462   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.588472   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.588477   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.590757   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.591153   29532 pod_ready.go:92] pod "etcd-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.591168   29532 pod_ready.go:81] duration metric: took 6.073744ms for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.591175   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.591218   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m02
	I0723 14:15:22.591225   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.591231   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.591235   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.594527   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.595556   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.595580   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.595587   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.595590   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.598114   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.598946   29532 pod_ready.go:92] pod "etcd-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.598963   29532 pod_ready.go:81] duration metric: took 7.781381ms for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.598975   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.754360   29532 request.go:629] Waited for 155.3269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:15:22.754437   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:15:22.754446   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.754465   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.754489   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.757940   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.954303   29532 request.go:629] Waited for 195.464577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.954362   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.954370   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.954388   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.954393   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.957713   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.958542   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.958560   29532 pod_ready.go:81] duration metric: took 359.578068ms for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.958576   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.153796   29532 request.go:629] Waited for 195.144154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:15:23.153868   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:15:23.153876   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.153886   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.153892   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.156933   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.354034   29532 request.go:629] Waited for 196.349856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:23.354081   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:23.354086   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.354093   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.354096   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.357388   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.358259   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:23.358279   29532 pod_ready.go:81] duration metric: took 399.695547ms for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.358288   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.554434   29532 request.go:629] Waited for 196.043801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:15:23.554498   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:15:23.554506   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.554517   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.554525   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.558177   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.754141   29532 request.go:629] Waited for 195.143969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:23.754192   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:23.754197   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.754205   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.754209   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.757663   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.758349   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:23.758365   29532 pod_ready.go:81] duration metric: took 400.070197ms for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.758388   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.953895   29532 request.go:629] Waited for 195.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:15:23.953947   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:15:23.953952   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.953959   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.953965   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.957273   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.154252   29532 request.go:629] Waited for 196.38583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.154326   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.154335   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.154345   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.154351   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.157728   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.158235   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.158251   29532 pod_ready.go:81] duration metric: took 399.855851ms for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.158261   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.354422   29532 request.go:629] Waited for 196.077704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:15:24.354478   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:15:24.354483   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.354490   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.354494   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.357783   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.553987   29532 request.go:629] Waited for 195.349961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:24.554065   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:24.554073   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.554082   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.554087   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.557585   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.558375   29532 pod_ready.go:92] pod "kube-proxy-9wh4w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.558406   29532 pod_ready.go:81] duration metric: took 400.138962ms for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.558415   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.754546   29532 request.go:629] Waited for 196.071606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:15:24.754624   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:15:24.754631   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.754641   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.754648   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.758475   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.954351   29532 request.go:629] Waited for 195.353695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.954440   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.954471   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.954483   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.954488   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.957901   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.958672   29532 pod_ready.go:92] pod "kube-proxy-p25cg" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.958692   29532 pod_ready.go:81] duration metric: took 400.271263ms for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.958701   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.153810   29532 request.go:629] Waited for 195.044638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:15:25.153904   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:15:25.153915   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.153926   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.153936   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.157074   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.353933   29532 request.go:629] Waited for 196.378685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:25.354009   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:25.354016   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.354024   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.354031   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.356883   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:25.357372   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:25.357394   29532 pod_ready.go:81] duration metric: took 398.68599ms for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.357408   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.554489   29532 request.go:629] Waited for 197.006685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:15:25.554577   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:15:25.554587   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.554598   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.554604   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.558571   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.753964   29532 request.go:629] Waited for 194.510316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:25.754021   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:25.754026   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.754034   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.754038   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.757353   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.757786   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:25.757804   29532 pod_ready.go:81] duration metric: took 400.387585ms for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.757819   29532 pod_ready.go:38] duration metric: took 3.200422142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
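(Editorial note: each pod_ready block above pairs a GET on the pod with a GET on its node before declaring the pod Ready. A rough kubectl equivalent of the whole sweep, iterating over the same label selectors listed in the log:)

  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
  done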
	I0723 14:15:25.757843   29532 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:15:25.757902   29532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:15:25.772757   29532 api_server.go:72] duration metric: took 19.487449649s to wait for apiserver process to appear ...
	I0723 14:15:25.772781   29532 api_server.go:88] waiting for apiserver healthz status ...
	I0723 14:15:25.772797   29532 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0723 14:15:25.776956   29532 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0723 14:15:25.777014   29532 round_trippers.go:463] GET https://192.168.39.103:8443/version
	I0723 14:15:25.777020   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.777034   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.777043   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.777964   29532 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0723 14:15:25.778048   29532 api_server.go:141] control plane version: v1.30.3
	I0723 14:15:25.778062   29532 api_server.go:131] duration metric: took 5.275939ms to wait for apiserver health ...
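(Editorial note: the health check above first looks for the kube-apiserver process with pgrep, then probes /healthz and /version on https://192.168.39.103:8443. A hedged shortcut that reuses kubectl's credentials instead of handling the client certificates directly:)

  kubectl get --raw /healthz   # expects the literal body "ok", as logged above
  kubectl get --raw /version   # reports the control plane version (v1.30.3 here)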
	I0723 14:15:25.778068   29532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:15:25.954458   29532 request.go:629] Waited for 176.335463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:25.954525   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:25.954531   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.954539   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.954543   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.959586   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:25.963823   29532 system_pods.go:59] 17 kube-system pods found
	I0723 14:15:25.963848   29532 system_pods.go:61] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:15:25.963852   29532 system_pods.go:61] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:15:25.963856   29532 system_pods.go:61] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:15:25.963860   29532 system_pods.go:61] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:15:25.963864   29532 system_pods.go:61] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:15:25.963868   29532 system_pods.go:61] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:15:25.963871   29532 system_pods.go:61] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:15:25.963875   29532 system_pods.go:61] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:15:25.963878   29532 system_pods.go:61] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:15:25.963882   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:15:25.963887   29532 system_pods.go:61] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:15:25.963890   29532 system_pods.go:61] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:15:25.963896   29532 system_pods.go:61] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:15:25.963900   29532 system_pods.go:61] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:15:25.963905   29532 system_pods.go:61] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:15:25.963908   29532 system_pods.go:61] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:15:25.963913   29532 system_pods.go:61] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:15:25.963919   29532 system_pods.go:74] duration metric: took 185.845925ms to wait for pod list to return data ...
	I0723 14:15:25.963928   29532 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:15:26.153552   29532 request.go:629] Waited for 189.561602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:15:26.153613   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:15:26.153619   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.153628   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.153638   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.157078   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:26.157313   29532 default_sa.go:45] found service account: "default"
	I0723 14:15:26.157331   29532 default_sa.go:55] duration metric: took 193.397665ms for default service account to be created ...
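(Editorial note: the default_sa step above only confirms that the "default" ServiceAccount exists in the default namespace; a one-line sketch of the same check:)

  kubectl -n default get serviceaccount default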
	I0723 14:15:26.157339   29532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:15:26.353699   29532 request.go:629] Waited for 196.295451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:26.353751   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:26.353756   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.353763   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.353766   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.358912   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:26.364015   29532 system_pods.go:86] 17 kube-system pods found
	I0723 14:15:26.364040   29532 system_pods.go:89] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:15:26.364047   29532 system_pods.go:89] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:15:26.364053   29532 system_pods.go:89] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:15:26.364059   29532 system_pods.go:89] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:15:26.364064   29532 system_pods.go:89] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:15:26.364069   29532 system_pods.go:89] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:15:26.364075   29532 system_pods.go:89] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:15:26.364081   29532 system_pods.go:89] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:15:26.364090   29532 system_pods.go:89] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:15:26.364100   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:15:26.364106   29532 system_pods.go:89] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:15:26.364112   29532 system_pods.go:89] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:15:26.364120   29532 system_pods.go:89] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:15:26.364128   29532 system_pods.go:89] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:15:26.364136   29532 system_pods.go:89] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:15:26.364142   29532 system_pods.go:89] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:15:26.364148   29532 system_pods.go:89] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:15:26.364159   29532 system_pods.go:126] duration metric: took 206.814001ms to wait for k8s-apps to be running ...
	I0723 14:15:26.364171   29532 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:15:26.364220   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:15:26.378922   29532 system_svc.go:56] duration metric: took 14.740952ms WaitForService to wait for kubelet
	I0723 14:15:26.378954   29532 kubeadm.go:582] duration metric: took 20.093650935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
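(Editorial note: the kubelet check above runs "systemctl is-active" over SSH on the node. A hedged way to reproduce it through minikube's own ssh wrapper, assuming the -n flag is used to select the secondary node:)

  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo systemctl is-active kubelet"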
	I0723 14:15:26.378973   29532 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:15:26.554375   29532 request.go:629] Waited for 175.329684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes
	I0723 14:15:26.554473   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes
	I0723 14:15:26.554481   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.554490   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.554496   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.558473   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:26.559158   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:15:26.559182   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:15:26.559197   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:15:26.559202   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:15:26.559207   29532 node_conditions.go:105] duration metric: took 180.230463ms to run NodePressure ...
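(Editorial note: the NodePressure step above logs each node's CPU and ephemeral-storage capacity while verifying pressure conditions; a hedged sketch to see the same data with kubectl:)

  kubectl get nodes
  # Capacity figures and MemoryPressure/DiskPressure/PIDPressure conditions per node
  kubectl describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure|cpu:|ephemeral-storage:'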
	I0723 14:15:26.559220   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:15:26.559249   29532 start.go:255] writing updated cluster config ...
	I0723 14:15:26.561275   29532 out.go:177] 
	I0723 14:15:26.562673   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:26.562784   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:26.564481   29532 out.go:177] * Starting "ha-533645-m03" control-plane node in "ha-533645" cluster
	I0723 14:15:26.565768   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:15:26.565799   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:15:26.565893   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:15:26.565904   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
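(Editorial note: the preload check above only verifies that the preloaded image tarball is already present in the local cache; a trivial sketch of the same lookup, path taken verbatim from the log:)

  ls -lh /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4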
	I0723 14:15:26.565986   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:26.566151   29532 start.go:360] acquireMachinesLock for ha-533645-m03: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:15:26.566192   29532 start.go:364] duration metric: took 22.445µs to acquireMachinesLock for "ha-533645-m03"
	I0723 14:15:26.566206   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:26.566323   29532 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0723 14:15:26.567992   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:15:26.568078   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:26.568111   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:26.583205   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0723 14:15:26.583743   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:26.584212   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:26.584230   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:26.584540   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:26.584713   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:26.584827   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:26.584930   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:15:26.584955   29532 client.go:168] LocalClient.Create starting
	I0723 14:15:26.584983   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:15:26.585019   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:15:26.585033   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:15:26.585078   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:15:26.585094   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:15:26.585102   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:15:26.585118   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:15:26.585126   29532 main.go:141] libmachine: (ha-533645-m03) Calling .PreCreateCheck
	I0723 14:15:26.585334   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:26.585749   29532 main.go:141] libmachine: Creating machine...
	I0723 14:15:26.585763   29532 main.go:141] libmachine: (ha-533645-m03) Calling .Create
	I0723 14:15:26.585874   29532 main.go:141] libmachine: (ha-533645-m03) Creating KVM machine...
	I0723 14:15:26.587216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found existing default KVM network
	I0723 14:15:26.587421   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found existing private KVM network mk-ha-533645
	I0723 14:15:26.587535   29532 main.go:141] libmachine: (ha-533645-m03) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 ...
	I0723 14:15:26.587558   29532 main.go:141] libmachine: (ha-533645-m03) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:15:26.587657   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:26.587547   30443 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:15:26.587721   29532 main.go:141] libmachine: (ha-533645-m03) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:15:26.820566   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:26.820456   30443 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa...
	I0723 14:15:27.015161   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:27.015020   30443 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/ha-533645-m03.rawdisk...
	I0723 14:15:27.015198   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Writing magic tar header
	I0723 14:15:27.015216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Writing SSH key tar header
	I0723 14:15:27.015234   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:27.015138   30443 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 ...
	I0723 14:15:27.015252   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03
	I0723 14:15:27.015319   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 (perms=drwx------)
	I0723 14:15:27.015344   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:15:27.015355   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:15:27.015373   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:15:27.015385   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:15:27.015399   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:15:27.015412   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:15:27.015425   29532 main.go:141] libmachine: (ha-533645-m03) Creating domain...
	I0723 14:15:27.015439   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:15:27.015451   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:15:27.015463   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:15:27.015473   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:15:27.015510   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home
	I0723 14:15:27.015536   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Skipping /home - not owner
	I0723 14:15:27.016417   29532 main.go:141] libmachine: (ha-533645-m03) define libvirt domain using xml: 
	I0723 14:15:27.016438   29532 main.go:141] libmachine: (ha-533645-m03) <domain type='kvm'>
	I0723 14:15:27.016446   29532 main.go:141] libmachine: (ha-533645-m03)   <name>ha-533645-m03</name>
	I0723 14:15:27.016455   29532 main.go:141] libmachine: (ha-533645-m03)   <memory unit='MiB'>2200</memory>
	I0723 14:15:27.016462   29532 main.go:141] libmachine: (ha-533645-m03)   <vcpu>2</vcpu>
	I0723 14:15:27.016470   29532 main.go:141] libmachine: (ha-533645-m03)   <features>
	I0723 14:15:27.016482   29532 main.go:141] libmachine: (ha-533645-m03)     <acpi/>
	I0723 14:15:27.016489   29532 main.go:141] libmachine: (ha-533645-m03)     <apic/>
	I0723 14:15:27.016498   29532 main.go:141] libmachine: (ha-533645-m03)     <pae/>
	I0723 14:15:27.016504   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.016517   29532 main.go:141] libmachine: (ha-533645-m03)   </features>
	I0723 14:15:27.016527   29532 main.go:141] libmachine: (ha-533645-m03)   <cpu mode='host-passthrough'>
	I0723 14:15:27.016552   29532 main.go:141] libmachine: (ha-533645-m03)   
	I0723 14:15:27.016573   29532 main.go:141] libmachine: (ha-533645-m03)   </cpu>
	I0723 14:15:27.016585   29532 main.go:141] libmachine: (ha-533645-m03)   <os>
	I0723 14:15:27.016596   29532 main.go:141] libmachine: (ha-533645-m03)     <type>hvm</type>
	I0723 14:15:27.016609   29532 main.go:141] libmachine: (ha-533645-m03)     <boot dev='cdrom'/>
	I0723 14:15:27.016620   29532 main.go:141] libmachine: (ha-533645-m03)     <boot dev='hd'/>
	I0723 14:15:27.016634   29532 main.go:141] libmachine: (ha-533645-m03)     <bootmenu enable='no'/>
	I0723 14:15:27.016648   29532 main.go:141] libmachine: (ha-533645-m03)   </os>
	I0723 14:15:27.016658   29532 main.go:141] libmachine: (ha-533645-m03)   <devices>
	I0723 14:15:27.016668   29532 main.go:141] libmachine: (ha-533645-m03)     <disk type='file' device='cdrom'>
	I0723 14:15:27.016685   29532 main.go:141] libmachine: (ha-533645-m03)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/boot2docker.iso'/>
	I0723 14:15:27.016697   29532 main.go:141] libmachine: (ha-533645-m03)       <target dev='hdc' bus='scsi'/>
	I0723 14:15:27.016709   29532 main.go:141] libmachine: (ha-533645-m03)       <readonly/>
	I0723 14:15:27.016723   29532 main.go:141] libmachine: (ha-533645-m03)     </disk>
	I0723 14:15:27.016739   29532 main.go:141] libmachine: (ha-533645-m03)     <disk type='file' device='disk'>
	I0723 14:15:27.016751   29532 main.go:141] libmachine: (ha-533645-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:15:27.016765   29532 main.go:141] libmachine: (ha-533645-m03)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/ha-533645-m03.rawdisk'/>
	I0723 14:15:27.016776   29532 main.go:141] libmachine: (ha-533645-m03)       <target dev='hda' bus='virtio'/>
	I0723 14:15:27.016788   29532 main.go:141] libmachine: (ha-533645-m03)     </disk>
	I0723 14:15:27.016803   29532 main.go:141] libmachine: (ha-533645-m03)     <interface type='network'>
	I0723 14:15:27.016816   29532 main.go:141] libmachine: (ha-533645-m03)       <source network='mk-ha-533645'/>
	I0723 14:15:27.016831   29532 main.go:141] libmachine: (ha-533645-m03)       <model type='virtio'/>
	I0723 14:15:27.016842   29532 main.go:141] libmachine: (ha-533645-m03)     </interface>
	I0723 14:15:27.016850   29532 main.go:141] libmachine: (ha-533645-m03)     <interface type='network'>
	I0723 14:15:27.016863   29532 main.go:141] libmachine: (ha-533645-m03)       <source network='default'/>
	I0723 14:15:27.016878   29532 main.go:141] libmachine: (ha-533645-m03)       <model type='virtio'/>
	I0723 14:15:27.016890   29532 main.go:141] libmachine: (ha-533645-m03)     </interface>
	I0723 14:15:27.016901   29532 main.go:141] libmachine: (ha-533645-m03)     <serial type='pty'>
	I0723 14:15:27.016913   29532 main.go:141] libmachine: (ha-533645-m03)       <target port='0'/>
	I0723 14:15:27.016923   29532 main.go:141] libmachine: (ha-533645-m03)     </serial>
	I0723 14:15:27.016934   29532 main.go:141] libmachine: (ha-533645-m03)     <console type='pty'>
	I0723 14:15:27.016942   29532 main.go:141] libmachine: (ha-533645-m03)       <target type='serial' port='0'/>
	I0723 14:15:27.016953   29532 main.go:141] libmachine: (ha-533645-m03)     </console>
	I0723 14:15:27.016965   29532 main.go:141] libmachine: (ha-533645-m03)     <rng model='virtio'>
	I0723 14:15:27.016977   29532 main.go:141] libmachine: (ha-533645-m03)       <backend model='random'>/dev/random</backend>
	I0723 14:15:27.016989   29532 main.go:141] libmachine: (ha-533645-m03)     </rng>
	I0723 14:15:27.016999   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.017028   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.017053   29532 main.go:141] libmachine: (ha-533645-m03)   </devices>
	I0723 14:15:27.017070   29532 main.go:141] libmachine: (ha-533645-m03) </domain>
	I0723 14:15:27.017074   29532 main.go:141] libmachine: (ha-533645-m03) 
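
The block above is the libvirt domain XML the kvm2 driver defines for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a bootable CD-ROM, the raw disk, and two virtio NICs (the private mk-ha-533645 network plus the default network). A minimal sketch of defining and starting an equivalent domain by hand, assuming virsh is on PATH and the XML has been saved to a hypothetical ha-533645-m03.xml file (the driver itself goes through the libvirt API rather than shelling out):

    // Sketch only, not the driver's code path: register the domain XML with
    // libvirt and boot it using virsh against the qemu:///system URI from the
    // machine config. ha-533645-m03.xml is a hypothetical local file name.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v failed: %v\n%s", name, args, err, out)
        }
        return nil
    }

    func main() {
        // "virsh define" makes the domain persistent; "virsh start" boots it,
        // after which the driver waits for a DHCP lease (see the retries below).
        if err := run("virsh", "--connect", "qemu:///system", "define", "ha-533645-m03.xml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := run("virsh", "--connect", "qemu:///system", "start", "ha-533645-m03"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
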
	I0723 14:15:27.023268   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:bb:e8:b3 in network default
	I0723 14:15:27.023910   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring networks are active...
	I0723 14:15:27.023941   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:27.024595   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring network default is active
	I0723 14:15:27.024936   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring network mk-ha-533645 is active
	I0723 14:15:27.025445   29532 main.go:141] libmachine: (ha-533645-m03) Getting domain xml...
	I0723 14:15:27.026306   29532 main.go:141] libmachine: (ha-533645-m03) Creating domain...
	I0723 14:15:28.248436   29532 main.go:141] libmachine: (ha-533645-m03) Waiting to get IP...
	I0723 14:15:28.249334   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.249733   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.249769   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.249722   30443 retry.go:31] will retry after 281.606831ms: waiting for machine to come up
	I0723 14:15:28.533482   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.534008   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.534030   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.533963   30443 retry.go:31] will retry after 385.152438ms: waiting for machine to come up
	I0723 14:15:28.920341   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.920872   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.920948   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.920792   30443 retry.go:31] will retry after 314.271869ms: waiting for machine to come up
	I0723 14:15:29.237053   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:29.237520   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:29.237550   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:29.237465   30443 retry.go:31] will retry after 471.988519ms: waiting for machine to come up
	I0723 14:15:29.711227   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:29.711743   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:29.711772   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:29.711695   30443 retry.go:31] will retry after 531.270874ms: waiting for machine to come up
	I0723 14:15:30.244371   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:30.244942   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:30.244970   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:30.244888   30443 retry.go:31] will retry after 770.53841ms: waiting for machine to come up
	I0723 14:15:31.016673   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:31.017006   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:31.017031   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:31.016973   30443 retry.go:31] will retry after 1.095715583s: waiting for machine to come up
	I0723 14:15:32.114498   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:32.115005   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:32.115035   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:32.114945   30443 retry.go:31] will retry after 1.280623697s: waiting for machine to come up
	I0723 14:15:33.397394   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:33.397826   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:33.397854   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:33.397779   30443 retry.go:31] will retry after 1.57925116s: waiting for machine to come up
	I0723 14:15:34.979429   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:34.979891   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:34.979929   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:34.979857   30443 retry.go:31] will retry after 1.686989757s: waiting for machine to come up
	I0723 14:15:36.668556   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:36.669180   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:36.669210   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:36.669127   30443 retry.go:31] will retry after 1.847102849s: waiting for machine to come up
	I0723 14:15:38.519171   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:38.519617   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:38.519670   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:38.519596   30443 retry.go:31] will retry after 2.787631648s: waiting for machine to come up
	I0723 14:15:41.308418   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:41.308777   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:41.308806   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:41.308742   30443 retry.go:31] will retry after 4.132953626s: waiting for machine to come up
	I0723 14:15:45.444189   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:45.444716   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:45.444747   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:45.444649   30443 retry.go:31] will retry after 4.976181345s: waiting for machine to come up
	I0723 14:15:50.425349   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.425885   29532 main.go:141] libmachine: (ha-533645-m03) Found IP for machine: 192.168.39.127
	I0723 14:15:50.425911   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.425919   29532 main.go:141] libmachine: (ha-533645-m03) Reserving static IP address...
	I0723 14:15:50.426246   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find host DHCP lease matching {name: "ha-533645-m03", mac: "52:54:00:76:92:af", ip: "192.168.39.127"} in network mk-ha-533645
	I0723 14:15:50.499815   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Getting to WaitForSSH function...
	I0723 14:15:50.499850   29532 main.go:141] libmachine: (ha-533645-m03) Reserved static IP address: 192.168.39.127
	I0723 14:15:50.499868   29532 main.go:141] libmachine: (ha-533645-m03) Waiting for SSH to be available...
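
The "will retry after ...: waiting for machine to come up" lines above show the driver polling libvirt's DHCP leases for the domain's MAC address (52:54:00:76:92:af) with a growing delay until an IP appears, then reserving it as a static lease. A minimal sketch of that retry pattern, using only the standard library; lookupLeaseIP is a hypothetical stand-in for the lease query:

    // Sketch of the retry-with-growing-delay pattern behind the
    // "will retry after ...: waiting for machine to come up" log lines.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP is a hypothetical placeholder for querying libvirt's DHCP
    // leases for a MAC address; it fails until a lease exists.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease for " + mac + " yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        base := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            // Add jitter and grow the base delay, as in the log above.
            wait := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            base += base / 2
        }
        return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:76:92:af", 5*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("Found IP:", ip)
        }
    }
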
	I0723 14:15:50.502999   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.503508   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.503536   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.503700   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using SSH client type: external
	I0723 14:15:50.503728   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa (-rw-------)
	I0723 14:15:50.503754   29532 main.go:141] libmachine: (ha-533645-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:15:50.503768   29532 main.go:141] libmachine: (ha-533645-m03) DBG | About to run SSH command:
	I0723 14:15:50.503779   29532 main.go:141] libmachine: (ha-533645-m03) DBG | exit 0
	I0723 14:15:50.626137   29532 main.go:141] libmachine: (ha-533645-m03) DBG | SSH cmd err, output: <nil>: 
	I0723 14:15:50.626421   29532 main.go:141] libmachine: (ha-533645-m03) KVM machine creation complete!
	I0723 14:15:50.626763   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:50.627266   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:50.627475   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:50.627653   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:15:50.627674   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:15:50.629326   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:15:50.629345   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:15:50.629354   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:15:50.629363   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.632139   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.632548   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.632574   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.632713   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.632887   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.633106   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.633257   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.633417   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.633656   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.633671   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:15:50.733471   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:15:50.733491   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:15:50.733499   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.736505   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.736855   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.736883   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.737066   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.737269   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.737489   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.737656   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.737816   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.737991   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.738005   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:15:50.838923   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:15:50.838988   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:15:50.838995   29532 main.go:141] libmachine: Provisioning with buildroot...
	I0723 14:15:50.839002   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:50.839223   29532 buildroot.go:166] provisioning hostname "ha-533645-m03"
	I0723 14:15:50.839244   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:50.839440   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.841695   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.842032   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.842048   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.842232   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.842428   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.842574   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.842678   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.842863   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.843040   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.843056   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645-m03 && echo "ha-533645-m03" | sudo tee /etc/hostname
	I0723 14:15:50.965435   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645-m03
	
	I0723 14:15:50.965460   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.968290   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.968712   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.968739   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.968981   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.969180   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.969364   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.969521   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.969692   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.969870   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.969891   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:15:51.079197   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:15:51.079221   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:15:51.079239   29532 buildroot.go:174] setting up certificates
	I0723 14:15:51.079249   29532 provision.go:84] configureAuth start
	I0723 14:15:51.079261   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:51.079532   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:51.082328   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.082845   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.082877   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.083066   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.085073   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.085410   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.085443   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.085609   29532 provision.go:143] copyHostCerts
	I0723 14:15:51.085644   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:15:51.085680   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:15:51.085692   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:15:51.085774   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:15:51.085866   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:15:51.085892   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:15:51.085902   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:15:51.085938   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:15:51.086005   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:15:51.086028   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:15:51.086036   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:15:51.086068   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:15:51.086136   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645-m03 san=[127.0.0.1 192.168.39.127 ha-533645-m03 localhost minikube]
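
The configureAuth step above issues a server certificate signed by the minikube CA with the node's addresses and hostnames as SANs (127.0.0.1, 192.168.39.127, ha-533645-m03, localhost, minikube). A minimal sketch of issuing such a certificate with the standard library; for brevity it creates a throwaway CA in memory rather than loading ca.pem / ca-key.pem from the .minikube certs directory:

    // Sketch: server certificate with the IP and DNS SANs seen in the log,
    // signed by an in-memory CA (stand-in for the real minikube CA files).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-533645-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.127")},
            DNSNames:     []string{"ha-533645-m03", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
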
	I0723 14:15:51.830193   29532 provision.go:177] copyRemoteCerts
	I0723 14:15:51.830248   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:15:51.830269   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.833287   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.833680   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.833708   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.833869   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:51.834069   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.834226   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:51.834352   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:51.917082   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:15:51.917158   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:15:51.943452   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:15:51.943522   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:15:51.966039   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:15:51.966108   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:15:51.988152   29532 provision.go:87] duration metric: took 908.889393ms to configureAuth
	I0723 14:15:51.988176   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:15:51.988386   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:51.988464   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.991263   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.991654   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.991674   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.991863   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:51.992078   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.992242   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.992368   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:51.992526   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:51.992680   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:51.992695   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:15:52.259466   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:15:52.259515   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:15:52.259530   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetURL
	I0723 14:15:52.260794   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using libvirt version 6000000
	I0723 14:15:52.263044   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.263453   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.263480   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.263670   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:15:52.263693   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:15:52.263700   29532 client.go:171] duration metric: took 25.678736772s to LocalClient.Create
	I0723 14:15:52.263720   29532 start.go:167] duration metric: took 25.678790025s to libmachine.API.Create "ha-533645"
	I0723 14:15:52.263729   29532 start.go:293] postStartSetup for "ha-533645-m03" (driver="kvm2")
	I0723 14:15:52.263738   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:15:52.263751   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.263963   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:15:52.263983   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.266402   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.266756   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.266781   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.266891   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.267086   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.267240   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.267374   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.348302   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:15:52.352200   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:15:52.352220   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:15:52.352280   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:15:52.352348   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:15:52.352358   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:15:52.352435   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:15:52.361140   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:15:52.384321   29532 start.go:296] duration metric: took 120.578802ms for postStartSetup
	I0723 14:15:52.384391   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:52.385025   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:52.387835   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.388216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.388242   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.388529   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:52.388732   29532 start.go:128] duration metric: took 25.822399136s to createHost
	I0723 14:15:52.388758   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.391279   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.391669   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.391694   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.391840   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.392029   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.392191   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.392397   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.392546   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:52.392727   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:52.392740   29532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:15:52.495009   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744152.474342471
	
	I0723 14:15:52.495027   29532 fix.go:216] guest clock: 1721744152.474342471
	I0723 14:15:52.495036   29532 fix.go:229] Guest: 2024-07-23 14:15:52.474342471 +0000 UTC Remote: 2024-07-23 14:15:52.388743425 +0000 UTC m=+173.749611455 (delta=85.599046ms)
	I0723 14:15:52.495054   29532 fix.go:200] guest clock delta is within tolerance: 85.599046ms
	I0723 14:15:52.495061   29532 start.go:83] releasing machines lock for "ha-533645-m03", held for 25.928862383s
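
The guest-clock check above parses the timestamp returned by date on the guest and subtracts the host's clock reading; the node is accepted because the skew is well under the tolerance. The same subtraction, with the two timestamps copied from the log lines above:

    // Reproduces the delta reported by fix.go: guest 14:15:52.474342471 UTC
    // minus host 14:15:52.388743425 UTC = 85.599046ms, within tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Date(2024, time.July, 23, 14, 15, 52, 474342471, time.UTC)
        host := time.Date(2024, time.July, 23, 14, 15, 52, 388743425, time.UTC)
        fmt.Println(guest.Sub(host)) // prints 85.599046ms
    }
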
	I0723 14:15:52.495079   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.495332   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:52.498049   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.498425   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.498451   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.500933   29532 out.go:177] * Found network options:
	I0723 14:15:52.502596   29532 out.go:177]   - NO_PROXY=192.168.39.103,192.168.39.182
	W0723 14:15:52.504006   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	W0723 14:15:52.504036   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:15:52.504052   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504645   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504857   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504964   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:15:52.505003   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	W0723 14:15:52.505045   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	W0723 14:15:52.505071   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:15:52.505146   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:15:52.505169   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.508077   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508103   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508405   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.508430   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508456   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.508470   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508566   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.508774   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.508778   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.508964   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.508971   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.509158   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.509152   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.509324   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.744633   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:15:52.750636   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:15:52.750711   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:15:52.766518   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 14:15:52.766537   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:15:52.766591   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:15:52.782045   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:15:52.794198   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:15:52.794266   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:15:52.807618   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:15:52.820716   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:15:52.943937   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:15:53.078325   29532 docker.go:233] disabling docker service ...
	I0723 14:15:53.078412   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:15:53.092946   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:15:53.106364   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:15:53.237962   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:15:53.357033   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:15:53.371439   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:15:53.389103   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:15:53.389165   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.399173   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:15:53.399238   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.408720   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.418077   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.428104   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:15:53.437770   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.447301   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.463326   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
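
The crictl.yaml write and the sed commands above configure CRI-O for minikube: registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager, conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. After those edits, the relevant keys of /etc/crio/crio.conf.d/02-crio.conf would look roughly like this (a sketch of the affected settings only; section headers and the surrounding contents depend on the guest image):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
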
	I0723 14:15:53.473778   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:15:53.482338   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:15:53.482415   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:15:53.494050   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:15:53.502660   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:53.615201   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:15:53.750921   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:15:53.750992   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:15:53.756801   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:15:53.756862   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:15:53.760286   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:15:53.795682   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:15:53.795748   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:15:53.825041   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:15:53.856964   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:15:53.858485   29532 out.go:177]   - env NO_PROXY=192.168.39.103
	I0723 14:15:53.859757   29532 out.go:177]   - env NO_PROXY=192.168.39.103,192.168.39.182
	I0723 14:15:53.860814   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:53.863390   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:53.863860   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:53.863889   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:53.864075   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:15:53.867881   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:15:53.879914   29532 mustload.go:65] Loading cluster: ha-533645
	I0723 14:15:53.880186   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:53.880561   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:53.880596   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:53.896041   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0723 14:15:53.896446   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:53.896856   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:53.896875   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:53.897194   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:53.897387   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:15:53.899415   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:15:53.899790   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:53.899834   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:53.914519   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0723 14:15:53.914883   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:53.915342   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:53.915362   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:53.915645   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:53.915822   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:15:53.915963   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.127
	I0723 14:15:53.915975   29532 certs.go:194] generating shared ca certs ...
	I0723 14:15:53.915993   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:53.916110   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:15:53.916147   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:15:53.916155   29532 certs.go:256] generating profile certs ...
	I0723 14:15:53.916219   29532 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:15:53.916244   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3
	I0723 14:15:53.916254   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.127 192.168.39.254]
	I0723 14:15:54.010349   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 ...
	I0723 14:15:54.010376   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3: {Name:mka157d08daeddba13fb0dc4d069c66ea442b999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:54.010596   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3 ...
	I0723 14:15:54.010614   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3: {Name:mkb672f50ec344593a19ac7e5590865fbf2b75c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:54.010689   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:15:54.010819   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
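
The crypto.go:68 step above issues a fresh apiserver serving certificate whose subject alternative names are exactly the IPs listed in that line: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, the three control-plane node addresses, and the 192.168.39.254 VIP. Purely as a hypothetical sketch of the same idea with the Go standard library (self-signed for brevity, whereas minikube signs the real cert with the cluster CA key; none of the identifiers below come from minikube's code):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the certificate (ECDSA keeps the sketch short; minikube uses RSA).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Template carrying the IP SANs seen in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.103"), net.ParseIP("192.168.39.182"),
			net.ParseIP("192.168.39.127"), net.ParseIP("192.168.39.254"),
		},
	}

	// Self-sign (parent == template); a real apiserver cert is signed by the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
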
	I0723 14:15:54.010939   29532 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:15:54.010954   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:15:54.010966   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:15:54.010976   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:15:54.010986   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:15:54.010995   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:15:54.011007   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:15:54.011020   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:15:54.011033   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:15:54.011078   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:15:54.011103   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:15:54.011113   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:15:54.011132   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:15:54.011155   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:15:54.011176   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:15:54.011212   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:15:54.011237   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.011250   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.011262   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.011295   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:15:54.014849   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:54.015284   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:15:54.015320   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:54.015460   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:15:54.015691   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:15:54.015847   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:15:54.015989   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:15:54.098822   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0723 14:15:54.104673   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0723 14:15:54.116899   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0723 14:15:54.120808   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0723 14:15:54.130327   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0723 14:15:54.134004   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0723 14:15:54.143628   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0723 14:15:54.147461   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0723 14:15:54.157233   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0723 14:15:54.161305   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0723 14:15:54.171626   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0723 14:15:54.176014   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0723 14:15:54.186368   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:15:54.210240   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:15:54.235403   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:15:54.259299   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:15:54.282611   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0723 14:15:54.304014   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 14:15:54.325639   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:15:54.347678   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:15:54.369722   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:15:54.392787   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:15:54.420476   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:15:54.442992   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0723 14:15:54.459183   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0723 14:15:54.475286   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0723 14:15:54.491900   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0723 14:15:54.508182   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0723 14:15:54.524424   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0723 14:15:54.540801   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0723 14:15:54.556879   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:15:54.562157   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:15:54.571962   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.576167   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.576211   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.582570   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:15:54.592778   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:15:54.603191   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.607659   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.607726   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.613448   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:15:54.624157   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:15:54.635881   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.641107   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.641177   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.646840   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
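
The three ln -fs steps above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is also reachable through a symlink named <subject-hash>.0, where by convention the link name is the certificate's subject hash, which is what the openssl x509 -hash -noout runs above compute. Read straight off the commands in this log, the links created on the node are:

/etc/ssl/certs/51391683.0 -> /etc/ssl/certs/18503.pem
/etc/ssl/certs/3ec20f2e.0 -> /etc/ssl/certs/185032.pem
/etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
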
	I0723 14:15:54.657916   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:15:54.662016   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:15:54.662062   29532 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.30.3 crio true true} ...
	I0723 14:15:54.662147   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:15:54.662177   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:15:54.662215   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:15:54.679594   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:15:54.679668   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0723 14:15:54.679722   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:15:54.690369   29532 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0723 14:15:54.690436   29532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0723 14:15:54.700621   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0723 14:15:54.700649   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:15:54.700653   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0723 14:15:54.700668   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0723 14:15:54.700681   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:15:54.700686   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:15:54.700718   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:15:54.700728   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:15:54.708166   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0723 14:15:54.708199   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0723 14:15:54.735401   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0723 14:15:54.735408   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:15:54.735452   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0723 14:15:54.735592   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:15:54.777898   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0723 14:15:54.777939   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
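
Each binary above is fetched from dl.k8s.io with a checksum=file:...sha256 hint, meaning the download is verified against the digest published alongside the binary before it lands in /var/lib/minikube/binaries. A small stand-alone illustration of that verify-after-download pattern (this is not minikube's download code; the /tmp destination is invented for the example):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	got, err := fetch(url, "/tmp/kubectl")
	if err != nil {
		panic(err)
	}
	// The published digest is a small text file sitting next to the binary.
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic("checksum mismatch for kubectl: got " + got)
	}
	fmt.Println("kubectl checksum verified:", got)
}
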
	I0723 14:15:55.570945   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0723 14:15:55.580996   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 14:15:55.598116   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:15:55.615486   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:15:55.631180   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:15:55.634776   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
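
This is the same strip-and-append pattern used for host.minikube.internal at 14:15:53.867881: remove any stale entry from /etc/hosts, rebuild the file in a temp location, then copy it back over /etc/hosts. After both edits the node's /etc/hosts would be expected to contain entries along the lines of (stock entries omitted):

192.168.39.1	host.minikube.internal
192.168.39.254	control-plane.minikube.internal
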
	I0723 14:15:55.648638   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:55.778734   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:15:55.795591   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:15:55.796076   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:55.796127   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:55.813989   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0723 14:15:55.814436   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:55.814929   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:55.814950   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:55.815292   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:55.815488   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:15:55.815637   29532 start.go:317] joinCluster: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:15:55.815752   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0723 14:15:55.815770   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:15:55.818827   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:55.819185   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:15:55.819212   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:55.819386   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:15:55.819580   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:15:55.819760   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:15:55.819945   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:15:55.978832   29532 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:55.978880   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1mxm0a.dzsiup6q6ovj1n1x --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0723 14:16:20.127146   29532 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1mxm0a.dzsiup6q6ovj1n1x --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (24.148216007s)
	I0723 14:16:20.127180   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0723 14:16:20.731680   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645-m03 minikube.k8s.io/updated_at=2024_07_23T14_16_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=false
	I0723 14:16:20.857972   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-533645-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0723 14:16:20.972765   29532 start.go:319] duration metric: took 25.157124447s to joinCluster
	I0723 14:16:20.972861   29532 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:16:20.973197   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:16:20.974580   29532 out.go:177] * Verifying Kubernetes components...
	I0723 14:16:20.975841   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:16:21.239954   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:16:21.303330   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:16:21.303668   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0723 14:16:21.303741   29532 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.103:8443
	I0723 14:16:21.303972   29532 node_ready.go:35] waiting up to 6m0s for node "ha-533645-m03" to be "Ready" ...
	I0723 14:16:21.304045   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:21.304055   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:21.304065   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:21.304073   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:21.307424   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:21.804744   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:21.804766   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:21.804775   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:21.804778   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:21.808098   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:22.305126   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:22.305171   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:22.305183   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:22.305189   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:22.310070   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:22.804717   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:22.804737   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:22.804744   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:22.804748   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:22.807630   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:23.305068   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:23.305091   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:23.305099   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:23.305104   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:23.308317   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:23.309043   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:23.805058   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:23.805076   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:23.805084   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:23.805088   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:23.808929   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:24.305048   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:24.305068   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:24.305076   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:24.305081   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:24.308279   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:24.804928   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:24.804948   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:24.804956   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:24.804962   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:24.810954   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:16:25.305165   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:25.305189   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:25.305199   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:25.305205   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:25.308482   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:25.309105   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:25.804654   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:25.804674   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:25.804683   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:25.804688   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:25.808057   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:26.304413   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:26.304434   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:26.304445   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:26.304450   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:26.307908   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:26.804220   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:26.804242   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:26.804249   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:26.804253   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:26.807487   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:27.304218   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:27.304240   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:27.304250   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:27.304255   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:27.308253   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:27.309136   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:27.804426   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:27.804447   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:27.804457   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:27.804462   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:27.807997   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:28.304354   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:28.304373   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:28.304381   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:28.304385   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:28.307504   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:28.804135   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:28.804158   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:28.804168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:28.804173   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:28.808162   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.305155   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:29.305176   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:29.305184   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:29.305187   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:29.308417   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.804230   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:29.804250   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:29.804258   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:29.804263   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:29.807379   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.807850   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:30.304219   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:30.304240   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:30.304249   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:30.304252   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:30.307960   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:30.804541   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:30.804563   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:30.804571   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:30.804575   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:30.809317   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:31.304988   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:31.305011   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:31.305021   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:31.305027   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:31.308405   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:31.804660   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:31.804681   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:31.804688   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:31.804692   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:31.808192   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:31.808803   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:32.304153   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:32.304185   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:32.304192   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:32.304196   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:32.307367   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:32.804344   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:32.804367   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:32.804376   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:32.804381   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:32.807377   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:33.304817   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:33.304839   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:33.304846   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:33.304851   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:33.308240   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:33.804976   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:33.804994   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:33.805002   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:33.805008   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:33.808620   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:33.809302   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:34.304479   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:34.304501   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:34.304511   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:34.304517   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:34.308172   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:34.804152   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:34.804179   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:34.804190   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:34.804196   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:34.807482   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:35.305018   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:35.305043   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:35.305055   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:35.305064   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:35.308408   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:35.804906   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:35.804930   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:35.804938   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:35.804943   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:35.808166   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:36.304597   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:36.304621   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:36.304633   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:36.304639   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:36.308300   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:36.309168   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:36.804950   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:36.804974   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:36.804986   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:36.804992   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:36.808398   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:37.304343   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:37.304366   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:37.304377   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:37.304385   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:37.308121   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:37.805070   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:37.805090   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:37.805100   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:37.805106   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:37.808319   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.304879   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:38.304902   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:38.304909   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:38.304914   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:38.308282   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.805003   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:38.805021   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:38.805029   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:38.805032   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:38.808215   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.808736   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:39.305053   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.305078   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.305088   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.305093   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.311321   29532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0723 14:16:39.311922   29532 node_ready.go:49] node "ha-533645-m03" has status "Ready":"True"
	I0723 14:16:39.311950   29532 node_ready.go:38] duration metric: took 18.007961675s for node "ha-533645-m03" to be "Ready" ...
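
The 18s wait above is a plain poll loop: node_ready.go GETs the node object roughly every 500ms and returns once the Ready condition turns True, giving up after the 6m budget. A rough, hypothetical sketch of that pattern using only the Go standard library (the real code goes through client-go and inspects .status.conditions):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady polls check every interval until it reports true or ctx expires.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for node to be Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	// Stand-in check; a real implementation would GET /api/v1/nodes/<name>
	// and look for the Ready condition in .status.conditions.
	attempts := 0
	err := waitReady(ctx, 500*time.Millisecond, func() (bool, error) {
		attempts++
		return attempts >= 5, nil // pretend the node turns Ready on the 5th poll
	})
	if err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("node Ready after", time.Since(start))
}
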
	I0723 14:16:39.311961   29532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:16:39.312035   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:39.312047   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.312056   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.312061   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.319892   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:39.326251   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.326338   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nrvbf
	I0723 14:16:39.326348   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.326355   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.326359   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.329926   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.330888   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.330905   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.330915   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.330920   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.333435   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.334052   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.334071   29532 pod_ready.go:81] duration metric: took 7.786961ms for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.334081   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.334146   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s6xzz
	I0723 14:16:39.334156   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.334168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.334177   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.336908   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.337747   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.337761   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.337770   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.337776   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.340573   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.340926   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.340940   29532 pod_ready.go:81] duration metric: took 6.851025ms for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.340951   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.340996   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645
	I0723 14:16:39.341005   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.341015   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.341022   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.343119   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.343603   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.343615   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.343624   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.343629   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.346126   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.346601   29532 pod_ready.go:92] pod "etcd-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.346618   29532 pod_ready.go:81] duration metric: took 5.659492ms for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.346627   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.346684   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m02
	I0723 14:16:39.346693   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.346704   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.346711   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.348901   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.349327   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:39.349339   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.349348   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.349354   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.351431   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.351913   29532 pod_ready.go:92] pod "etcd-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.351928   29532 pod_ready.go:81] duration metric: took 5.293908ms for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.351938   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.505161   29532 request.go:629] Waited for 153.168219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m03
	I0723 14:16:39.505237   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m03
	I0723 14:16:39.505245   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.505257   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.505268   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.508805   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.705500   29532 request.go:629] Waited for 195.995675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.705579   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.705591   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.705599   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.705607   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.709091   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.710129   29532 pod_ready.go:92] pod "etcd-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.710148   29532 pod_ready.go:81] duration metric: took 358.203577ms for pod "etcd-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.710165   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.905285   29532 request.go:629] Waited for 195.046973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:16:39.905336   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:16:39.905341   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.905347   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.905350   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.908659   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.105745   29532 request.go:629] Waited for 196.382777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:40.105808   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:40.105814   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.105821   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.105825   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.109266   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.109811   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.109829   29532 pod_ready.go:81] duration metric: took 399.655068ms for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.109841   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.305908   29532 request.go:629] Waited for 195.988243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:16:40.305969   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:16:40.305977   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.305987   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.305994   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.309384   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.505684   29532 request.go:629] Waited for 195.400548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:40.505739   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:40.505744   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.505749   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.505753   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.509739   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.510452   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.510473   29532 pod_ready.go:81] duration metric: took 400.624465ms for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.510487   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.705996   29532 request.go:629] Waited for 195.443515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m03
	I0723 14:16:40.706051   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m03
	I0723 14:16:40.706057   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.706064   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.706069   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.709564   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.905699   29532 request.go:629] Waited for 195.294921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:40.905760   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:40.905767   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.905777   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.905782   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.909452   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.909944   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.909961   29532 pod_ready.go:81] duration metric: took 399.468318ms for pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.909971   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.105133   29532 request.go:629] Waited for 195.096233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:16:41.105213   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:16:41.105220   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.105229   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.105237   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.108964   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:41.305464   29532 request.go:629] Waited for 195.868451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:41.305532   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:41.305538   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.305546   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.305550   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.308775   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:41.309478   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:41.309496   29532 pod_ready.go:81] duration metric: took 399.518903ms for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.309505   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.505757   29532 request.go:629] Waited for 196.172173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:16:41.505829   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:16:41.505837   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.505849   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.505861   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.510166   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:41.705113   29532 request.go:629] Waited for 194.242853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:41.705184   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:41.705193   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.705206   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.705217   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.713605   29532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0723 14:16:41.714305   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:41.714329   29532 pod_ready.go:81] duration metric: took 404.816581ms for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.714343   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.905444   29532 request.go:629] Waited for 191.011459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m03
	I0723 14:16:41.905531   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m03
	I0723 14:16:41.905542   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.905557   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.905567   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.908965   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.106141   29532 request.go:629] Waited for 196.385763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:42.106193   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:42.106198   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.106206   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.106210   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.109483   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.109967   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.109983   29532 pod_ready.go:81] duration metric: took 395.632651ms for pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.109991   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.305153   29532 request.go:629] Waited for 195.091701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:16:42.305204   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:16:42.305209   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.305216   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.305220   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.308537   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.505750   29532 request.go:629] Waited for 196.37531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:42.505809   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:42.505815   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.505826   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.505830   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.509049   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.509630   29532 pod_ready.go:92] pod "kube-proxy-9wh4w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.509652   29532 pod_ready.go:81] duration metric: took 399.65434ms for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.509661   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.705846   29532 request.go:629] Waited for 196.113608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:16:42.705921   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:16:42.705930   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.705944   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.705951   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.709128   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.905073   29532 request.go:629] Waited for 195.264685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:42.905146   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:42.905151   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.905158   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.905162   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.908373   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.908958   29532 pod_ready.go:92] pod "kube-proxy-p25cg" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.908972   29532 pod_ready.go:81] duration metric: took 399.30612ms for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.908982   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsk2w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.105044   29532 request.go:629] Waited for 196.001396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsk2w
	I0723 14:16:43.105140   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsk2w
	I0723 14:16:43.105151   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.105160   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.105171   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.108726   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.306047   29532 request.go:629] Waited for 196.381996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:43.306102   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:43.306107   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.306122   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.306140   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.309423   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.309947   29532 pod_ready.go:92] pod "kube-proxy-xsk2w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:43.309970   29532 pod_ready.go:81] duration metric: took 400.979959ms for pod "kube-proxy-xsk2w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.309981   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.506079   29532 request.go:629] Waited for 196.029634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:16:43.506131   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:16:43.506139   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.506147   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.506151   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.509315   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.706043   29532 request.go:629] Waited for 196.207662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:43.706105   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:43.706112   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.706121   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.706129   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.708973   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:43.709736   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:43.709751   29532 pod_ready.go:81] duration metric: took 399.764828ms for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.709759   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.905765   29532 request.go:629] Waited for 195.951609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:16:43.905822   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:16:43.905829   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.905839   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.905846   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.909170   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.105832   29532 request.go:629] Waited for 195.539296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:44.105904   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:44.105915   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.105926   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.105936   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.109197   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.109691   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:44.109706   29532 pod_ready.go:81] duration metric: took 399.940415ms for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.109714   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.305862   29532 request.go:629] Waited for 196.082514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m03
	I0723 14:16:44.305933   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m03
	I0723 14:16:44.305939   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.305947   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.305953   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.309634   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.505776   29532 request.go:629] Waited for 195.381264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:44.505825   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:44.505830   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.505840   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.505851   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.509375   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.509973   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:44.509995   29532 pod_ready.go:81] duration metric: took 400.274164ms for pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.510008   29532 pod_ready.go:38] duration metric: took 5.198035353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:16:44.510024   29532 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:16:44.510083   29532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:16:44.525393   29532 api_server.go:72] duration metric: took 23.552497184s to wait for apiserver process to appear ...
	I0723 14:16:44.525418   29532 api_server.go:88] waiting for apiserver healthz status ...
	I0723 14:16:44.525438   29532 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0723 14:16:44.529527   29532 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0723 14:16:44.529609   29532 round_trippers.go:463] GET https://192.168.39.103:8443/version
	I0723 14:16:44.529619   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.529631   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.529640   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.530449   29532 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0723 14:16:44.530529   29532 api_server.go:141] control plane version: v1.30.3
	I0723 14:16:44.530553   29532 api_server.go:131] duration metric: took 5.128474ms to wait for apiserver health ...
	I0723 14:16:44.530567   29532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:16:44.706031   29532 request.go:629] Waited for 175.341019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:44.706120   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:44.706128   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.706138   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.706148   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.713376   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:44.721248   29532 system_pods.go:59] 24 kube-system pods found
	I0723 14:16:44.721276   29532 system_pods.go:61] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:16:44.721282   29532 system_pods.go:61] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:16:44.721286   29532 system_pods.go:61] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:16:44.721302   29532 system_pods.go:61] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:16:44.721306   29532 system_pods.go:61] "etcd-ha-533645-m03" [3ec29d59-0196-4ebf-ac28-f70415297b7c] Running
	I0723 14:16:44.721309   29532 system_pods.go:61] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:16:44.721312   29532 system_pods.go:61] "kindnet-99qsf" [b7121912-e364-489d-ae7d-b762094fade9] Running
	I0723 14:16:44.721316   29532 system_pods.go:61] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:16:44.721322   29532 system_pods.go:61] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:16:44.721325   29532 system_pods.go:61] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:16:44.721328   29532 system_pods.go:61] "kube-apiserver-ha-533645-m03" [264831e9-6816-45a8-b917-ef003a6aefd8] Running
	I0723 14:16:44.721331   29532 system_pods.go:61] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:16:44.721337   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:16:44.721340   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m03" [d3604797-9120-4668-93c6-8c5325f3854a] Running
	I0723 14:16:44.721346   29532 system_pods.go:61] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:16:44.721349   29532 system_pods.go:61] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:16:44.721352   29532 system_pods.go:61] "kube-proxy-xsk2w" [28febb11-2841-47d3-ae98-4f53347e568d] Running
	I0723 14:16:44.721355   29532 system_pods.go:61] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:16:44.721358   29532 system_pods.go:61] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:16:44.721362   29532 system_pods.go:61] "kube-scheduler-ha-533645-m03" [92b55f29-a3c2-418b-9575-b2a60e52ad62] Running
	I0723 14:16:44.721367   29532 system_pods.go:61] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:16:44.721369   29532 system_pods.go:61] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:16:44.721372   29532 system_pods.go:61] "kube-vip-ha-533645-m03" [ffece806-d630-4ffe-9a91-9c94311508f0] Running
	I0723 14:16:44.721375   29532 system_pods.go:61] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:16:44.721380   29532 system_pods.go:74] duration metric: took 190.805076ms to wait for pod list to return data ...
	I0723 14:16:44.721389   29532 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:16:44.905823   29532 request.go:629] Waited for 184.361301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:16:44.905874   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:16:44.905879   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.905886   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.905890   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.909086   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.909204   29532 default_sa.go:45] found service account: "default"
	I0723 14:16:44.909219   29532 default_sa.go:55] duration metric: took 187.824123ms for default service account to be created ...
	I0723 14:16:44.909230   29532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:16:45.105653   29532 request.go:629] Waited for 196.356753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:45.105734   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:45.105742   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:45.105752   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:45.105760   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:45.113451   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:45.119771   29532 system_pods.go:86] 24 kube-system pods found
	I0723 14:16:45.119797   29532 system_pods.go:89] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:16:45.119803   29532 system_pods.go:89] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:16:45.119807   29532 system_pods.go:89] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:16:45.119811   29532 system_pods.go:89] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:16:45.119815   29532 system_pods.go:89] "etcd-ha-533645-m03" [3ec29d59-0196-4ebf-ac28-f70415297b7c] Running
	I0723 14:16:45.119819   29532 system_pods.go:89] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:16:45.119823   29532 system_pods.go:89] "kindnet-99qsf" [b7121912-e364-489d-ae7d-b762094fade9] Running
	I0723 14:16:45.119828   29532 system_pods.go:89] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:16:45.119832   29532 system_pods.go:89] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:16:45.119836   29532 system_pods.go:89] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:16:45.119842   29532 system_pods.go:89] "kube-apiserver-ha-533645-m03" [264831e9-6816-45a8-b917-ef003a6aefd8] Running
	I0723 14:16:45.119849   29532 system_pods.go:89] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:16:45.119854   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:16:45.119860   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m03" [d3604797-9120-4668-93c6-8c5325f3854a] Running
	I0723 14:16:45.119866   29532 system_pods.go:89] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:16:45.119875   29532 system_pods.go:89] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:16:45.119881   29532 system_pods.go:89] "kube-proxy-xsk2w" [28febb11-2841-47d3-ae98-4f53347e568d] Running
	I0723 14:16:45.119891   29532 system_pods.go:89] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:16:45.119896   29532 system_pods.go:89] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:16:45.119900   29532 system_pods.go:89] "kube-scheduler-ha-533645-m03" [92b55f29-a3c2-418b-9575-b2a60e52ad62] Running
	I0723 14:16:45.119904   29532 system_pods.go:89] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:16:45.119910   29532 system_pods.go:89] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:16:45.119914   29532 system_pods.go:89] "kube-vip-ha-533645-m03" [ffece806-d630-4ffe-9a91-9c94311508f0] Running
	I0723 14:16:45.119918   29532 system_pods.go:89] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:16:45.119926   29532 system_pods.go:126] duration metric: took 210.68981ms to wait for k8s-apps to be running ...
	I0723 14:16:45.119936   29532 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:16:45.119987   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:16:45.134807   29532 system_svc.go:56] duration metric: took 14.864593ms WaitForService to wait for kubelet
	I0723 14:16:45.134832   29532 kubeadm.go:582] duration metric: took 24.161941777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:16:45.134850   29532 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:16:45.305160   29532 request.go:629] Waited for 170.246266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes
	I0723 14:16:45.305209   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes
	I0723 14:16:45.305214   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:45.305221   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:45.305229   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:45.309334   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:45.310713   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310735   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310749   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310753   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310759   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310764   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310770   29532 node_conditions.go:105] duration metric: took 175.91549ms to run NodePressure ...
	I0723 14:16:45.310783   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:16:45.310811   29532 start.go:255] writing updated cluster config ...
	I0723 14:16:45.311165   29532 ssh_runner.go:195] Run: rm -f paused
	I0723 14:16:45.361735   29532 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 14:16:45.363591   29532 out.go:177] * Done! kubectl is now configured to use "ha-533645" cluster and "default" namespace by default
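
The log above is the tail of minikube's HA start-up verification: it polls GET /api/v1/nodes/<name> until the new node reports Ready, then waits on each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), probes /healthz and /version, and finally inspects node conditions. The snippet below is a minimal client-go sketch of the same node-readiness poll, not minikube's actual implementation; the kubeconfig path, poll interval, and 6-minute timeout are assumptions chosen to mirror the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube writes for the cluster (~/.kube/config by default; path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodeName := "ha-533645-m03" // the node polled in the log above

	// Poll every 500ms for up to 6 minutes until the node reports Ready=True,
	// mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}

The "Waited for ... due to client-side throttling" lines in the log come from client-go's default client-side rate limiter (roughly 5 requests per second with a burst of 10), which is why consecutive pod/node GETs are spaced about 200ms apart.
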
	
	
	==> CRI-O <==
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.777612789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f9491e8-9558-46a4-b0b8-a47c0c6df663 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.778427991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=721814b7-0d42-4a3c-8c0e-525541b95dc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.778877230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744426778855738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=721814b7-0d42-4a3c-8c0e-525541b95dc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.779457607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bd2fdc1-0e2a-412b-8847-af3384799871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.779520523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bd2fdc1-0e2a-412b-8847-af3384799871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.779786761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bd2fdc1-0e2a-412b-8847-af3384799871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.789978389Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddc594eb-f7fb-431e-b7a6-62ca0c8c7a14 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.790942563Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-cd87c,Uid:c96075c6-138f-49ca-80af-c75e842c5852,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744207484085195,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:16:46.274827348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrvbf,Uid:ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721744046119612900,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.809780225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744046111338700,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T14:14:05.803689838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s6xzz,Uid:926a30df-71f1-48d7-92fb-ead057f2504d,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1721744046102839801,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.795958697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&PodSandboxMetadata{Name:kindnet-99vkr,Uid:495ea524-de15-401d-9ed3-fec375bc8042,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744029809613826,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.495076967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&PodSandboxMetadata{Name:kube-proxy-9wh4w,Uid:d9eb4982-e145-42cf-9a84-6013d7cdd3aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744029807470232,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.486258898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&PodSandboxMetadata{Name:etcd-ha-533645,Uid:0116d3bd9333422ee3ba97043c03c966,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721744010425473909,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.103:2379,kubernetes.io/config.hash: 0116d3bd9333422ee3ba97043c03c966,kubernetes.io/config.seen: 2024-07-23T14:13:29.926300650Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-533645,Uid:a779b56396ae961a52b991bf79e41c79,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010422906791,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a779b56396ae961a52b991bf79e41c79,kubernetes.io/config.seen: 2024-07-23T14:13:29.926307126Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-533645,Uid:6de7f3c8e278c087425628d1b79c1d22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010404618151,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6de7f3c8e278c087425628d1b79c1d22,kubernetes.io/config.seen: 2024-07-23T14:13:29.926308682Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc95369f4505809db69c
a9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-533645,Uid:5693e50c5ce4a113bda653dc5ed85d89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010394627656,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.103:8443,kubernetes.io/config.hash: 5693e50c5ce4a113bda653dc5ed85d89,kubernetes.io/config.seen: 2024-07-23T14:13:29.926305579Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-533645,Uid:9f8fbea26449d1f00f1c8649ad6192db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010386223112,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{kubernetes.io/config.hash: 9f8fbea26449d1f00f1c8649ad6192db,kubernetes.io/config.seen: 2024-07-23T14:13:29.926310051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ddc594eb-f7fb-431e-b7a6-62ca0c8c7a14 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.791820275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceba6504-583f-4523-a48d-9b07ce9ee3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.791880092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceba6504-583f-4523-a48d-9b07ce9ee3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.792109968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ceba6504-583f-4523-a48d-9b07ce9ee3d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.825016882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6755255-87dd-41e6-a8c3-b65424b5b0f8 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.825087632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6755255-87dd-41e6-a8c3-b65424b5b0f8 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.826267361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f1f8aa6-4d19-4cea-b7fc-6cf8f2e53860 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.826730542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744426826705641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f1f8aa6-4d19-4cea-b7fc-6cf8f2e53860 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.827354455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4eb07e4e-06a4-47f7-b0e3-c6b1c3090db3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.827434914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4eb07e4e-06a4-47f7-b0e3-c6b1c3090db3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.827691697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4eb07e4e-06a4-47f7-b0e3-c6b1c3090db3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.865948417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=302bcd8a-559a-415c-9861-5a311fad897f name=/runtime.v1.RuntimeService/Version
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.866022410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=302bcd8a-559a-415c-9861-5a311fad897f name=/runtime.v1.RuntimeService/Version
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.867233681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afcafb0a-33e9-4f8d-89f2-7ffc68f89401 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.867695747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744426867673397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afcafb0a-33e9-4f8d-89f2-7ffc68f89401 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.868384003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f425f56-9df2-4d8d-94b4-1925043d2e93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.868444031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f425f56-9df2-4d8d-94b4-1925043d2e93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:20:26 ha-533645 crio[675]: time="2024-07-23 14:20:26.868702110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f425f56-9df2-4d8d-94b4-1925043d2e93 name=/runtime.v1.RuntimeService/ListContainers
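	
The crio debug entries above show the CRI RuntimeService and ImageService being polled over the runtime's gRPC socket: Version, ImageFsInfo, ListPodSandbox, and ListContainers requests recur while the log is collected, each ListContainers filtered to CONTAINER_RUNNING. As a minimal sketch only (not code from the test suite), the same ListContainers call could be issued with the k8s.io/cri-api v1 bindings against the socket named in the node's cri-socket annotation (unix:///var/run/crio/crio.sock); module versions and error handling are assumptions.

    // cri_list.go - illustrative sketch: issue the same CRI calls seen in the
    // crio debug log above. Assumes k8s.io/cri-api and grpc are available.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path taken from the node's kubeadm cri-socket annotation.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same shape as the logged VersionRequest.
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

        // Same filter as the logged request: only CONTAINER_RUNNING containers.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                State: &runtimeapi.ContainerStateValue{
                    State: runtimeapi.ContainerState_CONTAINER_RUNNING,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }
	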
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01ba0f9525e42       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8e48b2467dce8       busybox-fc5497c4f-cd87c
	875e4306cadef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   67e32a92d8db3       coredns-7db6d8ff4d-nrvbf
	c272094e83046       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a7feedf1d20d0       coredns-7db6d8ff4d-s6xzz
	ee98d1058de99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bc76cb45947ed       storage-provisioner
	204bd8ec5a070       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   08c39cde805a7       kindnet-99vkr
	1d5b9787b76de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   8cb09524a9c81       kube-proxy-9wh4w
	a208ea67ea379       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   5e993964684c6       kube-vip-ha-533645
	76bcad60035c6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   c988725ad6a30       kube-controller-manager-ha-533645
	081aaa8c6121c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   5d23d91d7b6c3       etcd-ha-533645
	7972ddd5dc32d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   17bfeff63e984       kube-scheduler-ha-533645
	e28c0ebf351e0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   bc95369f45058       kube-apiserver-ha-533645
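	
The "container status" table above is the human-readable counterpart of the ListContainers responses earlier in the log, and the logged ImageFsInfo responses report the UsedBytes and InodesUsed for /var/lib/containers/storage/overlay-images. A comparable snapshot of the image store can be taken through the CRI ImageService; the sketch below is illustrative only, reuses the crio socket path assumed above, and is not part of the captured run.

    // image_fs.go - illustrative sketch: query the CRI ImageService for the
    // same ImageFsInfo data that appears in the crio debug log, plus the image
    // list backing the IMAGE column above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Filesystem usage for the image store (mountpoint, used bytes).
        fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, f := range fs.ImageFilesystems {
            fmt.Printf("%s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
        }

        // Images known to the runtime, by ID and tag.
        imgs, err := img.ListImages(ctx, &runtimeapi.ListImagesRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, i := range imgs.Images {
            fmt.Println(i.Id, i.RepoTags)
        }
    }
	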
	
	
	==> coredns [875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219] <==
	[INFO] 10.244.0.4:45062 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281951s
	[INFO] 10.244.0.4:39795 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002944241s
	[INFO] 10.244.0.4:33788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001262s
	[INFO] 10.244.0.4:49837 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156655s
	[INFO] 10.244.0.4:37869 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111262s
	[INFO] 10.244.0.4:49583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187618s
	[INFO] 10.244.0.4:47929 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087678s
	[INFO] 10.244.2.2:38089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189381s
	[INFO] 10.244.2.2:42424 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002105089s
	[INFO] 10.244.2.2:44423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066747s
	[INFO] 10.244.1.2:32850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770779s
	[INFO] 10.244.1.2:53620 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074588s
	[INFO] 10.244.1.2:33169 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009318s
	[INFO] 10.244.0.4:47876 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009475s
	[INFO] 10.244.2.2:42045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092251s
	[INFO] 10.244.2.2:58530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137054s
	[INFO] 10.244.1.2:36698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167251s
	[INFO] 10.244.1.2:56144 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082378s
	[INFO] 10.244.1.2:37800 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138485s
	[INFO] 10.244.0.4:35800 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198717s
	[INFO] 10.244.0.4:55540 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113741s
	[INFO] 10.244.0.4:40041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000256677s
	[INFO] 10.244.1.2:51609 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132031s
	[INFO] 10.244.1.2:56610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023971s
	[INFO] 10.244.1.2:42525 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084914s
	
	
	==> coredns [c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46] <==
	[INFO] 10.244.1.2:37484 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000094208s
	[INFO] 10.244.1.2:41079 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001672282s
	[INFO] 10.244.0.4:39127 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003361091s
	[INFO] 10.244.2.2:49158 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214966s
	[INFO] 10.244.2.2:52807 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149002s
	[INFO] 10.244.2.2:36170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001374503s
	[INFO] 10.244.2.2:32919 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148684s
	[INFO] 10.244.2.2:33222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130497s
	[INFO] 10.244.1.2:41720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132072s
	[INFO] 10.244.1.2:46039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136478s
	[INFO] 10.244.1.2:42265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246596s
	[INFO] 10.244.1.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106745s
	[INFO] 10.244.1.2:42065 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173598s
	[INFO] 10.244.0.4:49694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097989s
	[INFO] 10.244.0.4:55332 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105679s
	[INFO] 10.244.0.4:55778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057634s
	[INFO] 10.244.2.2:46643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151446s
	[INFO] 10.244.2.2:47656 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125295s
	[INFO] 10.244.1.2:33099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116864s
	[INFO] 10.244.0.4:43829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233901s
	[INFO] 10.244.2.2:39898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180683s
	[INFO] 10.244.2.2:53185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148942s
	[INFO] 10.244.2.2:36301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319769s
	[INFO] 10.244.2.2:54739 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011416s
	[INFO] 10.244.1.2:40740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148117s
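The queries above cover both in-cluster names (kubernetes.default.svc.cluster.local and the PTR for 10.96.0.1) and host.minikube.internal. As a rough way to reproduce one of these lookups by hand, assuming the kubectl context carries the profile name ha-533645 and reusing the busybox pod listed in the node tables below:

	kubectl --context ha-533645 exec busybox-fc5497c4f-cd87c -- nslookup kubernetes.default.svc.cluster.local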
	
	
	==> describe nodes <==
	Name:               ha-533645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:20:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:14:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-533645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 016f247620dd4139a26ce62f3129dde1
	  System UUID:                016f2476-20dd-4139-a26c-e62f3129dde1
	  Boot ID:                    218264a1-e12e-486d-a0c2-4ec59bc9cd30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cd87c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7db6d8ff4d-nrvbf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m38s
	  kube-system                 coredns-7db6d8ff4d-s6xzz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m38s
	  kube-system                 etcd-ha-533645                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m52s
	  kube-system                 kindnet-99vkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-apiserver-ha-533645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-ha-533645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-proxy-9wh4w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-scheduler-ha-533645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-vip-ha-533645                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 6m58s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m57s (x7 over 6m58s)  kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m57s (x8 over 6m58s)  kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s (x8 over 6m58s)  kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m51s                  kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s                  kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m51s                  kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m39s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal  NodeReady                6m22s                  kubelet          Node ha-533645 status is now: NodeReady
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	
	
	Name:               ha-533645-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:15:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:17:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-533645-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 024bddfd48eb471b960e0dab2d3cd45b
	  System UUID:                024bddfd-48eb-471b-960e-0dab2d3cd45b
	  Boot ID:                    151372c0-a26e-4262-8f8f-67f30f77aff3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tlvlp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-533645-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-95sfh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-533645-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-533645-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-p25cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-533645-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-533645-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-533645-m02 status is now: NodeNotReady
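The Unknown conditions and the node.kubernetes.io/unreachable taints on ha-533645-m02 line up with the kubelet having stopped posting status at 14:18:38, i.e. the secondary control-plane node is down from the cluster's point of view. A quick cross-check from the surviving control plane (context name ha-533645 assumed) would be:

	kubectl --context ha-533645 get nodes -o wide

which should show ha-533645-m02 as NotReady while the other three nodes remain Ready.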
	
	
	Name:               ha-533645-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_16_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-533645-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58ea8f3065de44aea0aac5ffb591660d
	  System UUID:                58ea8f30-65de-44ae-a0aa-c5ffb591660d
	  Boot ID:                    a51eb8ca-a3c9-4da0-bf41-6ea9d59a8829
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq2ww                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-533645-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-99qsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-533645-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-533645-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-xsk2w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-533645-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-533645-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-533645-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	
	
	Name:               ha-533645-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_17_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:17:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:20:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-533645-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d58ceb89e2492c9f4ada3b3365c263
	  System UUID:                c6d58ceb-89e2-492c-9f4a-da3b3365c263
	  Boot ID:                    02dbcde4-8925-40e5-a9f0-f49b7734fc1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f4tkn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-nz528    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-533645-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul23 14:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050205] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.689589] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.850291] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.556173] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.424464] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.065789] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058371] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.157255] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.139843] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.253665] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.906302] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.745369] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.058504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.271647] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.077951] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.844081] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.054308] kauditd_printk_skb: 34 callbacks suppressed
	[Jul23 14:15] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e] <==
	{"level":"warn","ts":"2024-07-23T14:20:27.145064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.151462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.164559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.174985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.18286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.186562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.190011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.194506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.200317Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.20241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.206681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.208596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.217212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.221333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.225352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.235189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.244632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.252036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.255983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.259083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.261765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.266637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.27438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.281298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:20:27.30227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:20:27 up 7 min,  0 users,  load average: 0.26, 0.23, 0.11
	Linux ha-533645 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493] <==
	I0723 14:19:55.723846       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:20:05.723941       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:20:05.724003       1 main.go:299] handling current node
	I0723 14:20:05.724022       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:20:05.724028       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:20:05.724315       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:20:05.724337       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:20:05.724404       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:20:05.724421       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:20:15.722803       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:20:15.722974       1 main.go:299] handling current node
	I0723 14:20:15.723016       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:20:15.723036       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:20:15.723319       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:20:15.723356       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:20:15.723439       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:20:15.723460       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:20:25.726371       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:20:25.726428       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:20:25.726585       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:20:25.726606       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:20:25.726665       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:20:25.726681       1 main.go:299] handling current node
	I0723 14:20:25.726703       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:20:25.726717       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3] <==
	W0723 14:13:35.234036       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103]
	I0723 14:13:35.235051       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:13:35.239042       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0723 14:13:35.345552       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:13:36.880343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:13:36.910767       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0723 14:13:36.928200       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:13:49.204695       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 14:13:49.454603       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0723 14:16:51.018356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53594: use of closed network connection
	E0723 14:16:51.203052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53600: use of closed network connection
	E0723 14:16:51.390482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53628: use of closed network connection
	E0723 14:16:51.577002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53648: use of closed network connection
	E0723 14:16:51.765569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53664: use of closed network connection
	E0723 14:16:51.951051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53682: use of closed network connection
	E0723 14:16:52.124107       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53704: use of closed network connection
	E0723 14:16:52.305698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53714: use of closed network connection
	E0723 14:16:52.477519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53730: use of closed network connection
	E0723 14:16:52.752630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53746: use of closed network connection
	E0723 14:16:52.955792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53764: use of closed network connection
	E0723 14:16:53.126631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53784: use of closed network connection
	E0723 14:16:53.301502       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53802: use of closed network connection
	E0723 14:16:53.478848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53824: use of closed network connection
	E0723 14:16:53.647412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53850: use of closed network connection
	W0723 14:18:15.250667       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.127]
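The repeated "use of closed network connection" reads are all on 192.168.39.254:8443, which is not one of the node IPs (.103, .182, .127, .162) and so appears to be the kube-vip fronted apiserver endpoint; clients hanging up on the shared VIP produce this kind of error. A simple probe of that endpoint, assuming the ha-533645 context points at it, is:

	kubectl --context ha-533645 get --raw='/readyz?verbose'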
	
	
	==> kube-controller-manager [76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c] <==
	I0723 14:16:18.614450       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-533645-m03"
	I0723 14:16:46.277524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.174892ms"
	I0723 14:16:46.317741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.901363ms"
	I0723 14:16:46.484553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.669785ms"
	I0723 14:16:46.571277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.424291ms"
	I0723 14:16:46.630592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.176373ms"
	E0723 14:16:46.630662       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:16:46.649326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.545549ms"
	I0723 14:16:46.649885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.79µs"
	I0723 14:16:46.734858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.646634ms"
	I0723 14:16:46.735172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.856µs"
	I0723 14:16:49.337385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.570376ms"
	I0723 14:16:49.337563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.438µs"
	I0723 14:16:50.457869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.14968ms"
	I0723 14:16:50.457952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.352µs"
	I0723 14:16:50.590664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.043605ms"
	I0723 14:16:50.590965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.8µs"
	E0723 14:17:26.166343       1 certificate_controller.go:146] Sync csr-5xkvd failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-5xkvd": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:17:26.447459       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-533645-m04\" does not exist"
	I0723 14:17:26.466812       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-533645-m04" podCIDRs=["10.244.3.0/24"]
	I0723 14:17:28.627640       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-533645-m04"
	I0723 14:17:46.647314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-533645-m04"
	I0723 14:18:38.667971       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-533645-m04"
	I0723 14:18:38.830910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.20937ms"
	I0723 14:18:38.831100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.783µs"
	
	
	==> kube-proxy [1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e] <==
	I0723 14:13:50.430698       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:13:50.446236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0723 14:13:50.513939       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:13:50.513988       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:13:50.514006       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:13:50.517541       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:13:50.517784       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:13:50.517815       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:13:50.523174       1 config.go:192] "Starting service config controller"
	I0723 14:13:50.523448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:13:50.523931       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:13:50.523955       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:13:50.524688       1 config.go:319] "Starting node config controller"
	I0723 14:13:50.524712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:13:50.624948       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:13:50.624996       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:13:50.625037       1 shared_informer.go:320] Caches are synced for endpoint slice config
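kube-proxy came up with the iptables proxier in single-stack IPv4 mode, so Service traffic on this node is steered by iptables NAT chains rather than IPVS. If those chains need inspecting, a sketch in the same style as the other commands in this report (profile name ha-533645 assumed) would be:

	out/minikube-linux-amd64 -p ha-533645 ssh "sudo iptables -t nat -L KUBE-SERVICES"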
	
	
	==> kube-scheduler [7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090] <==
	W0723 14:13:34.320386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:13:34.320435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:13:34.479391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:13:34.479431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:13:34.514764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:13:34.514882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:13:34.617221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:13:34.617366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:13:34.719034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:13:34.719229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:13:34.730376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:13:34.730419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:13:34.812082       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:13:34.812162       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 14:13:37.973607       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0723 14:16:46.281408       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cd87c\": pod busybox-fc5497c4f-cd87c is already assigned to node \"ha-533645\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cd87c" node="ha-533645"
	E0723 14:16:46.281593       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cd87c\": pod busybox-fc5497c4f-cd87c is already assigned to node \"ha-533645\"" pod="default/busybox-fc5497c4f-cd87c"
	E0723 14:17:26.517858       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nz528\": pod kube-proxy-nz528 is already assigned to node \"ha-533645-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nz528" node="ha-533645-m04"
	E0723 14:17:26.518053       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f058c988-f8e0-477d-9e96-73e0ee09d91e(kube-system/kube-proxy-nz528) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nz528"
	E0723 14:17:26.518185       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nz528\": pod kube-proxy-nz528 is already assigned to node \"ha-533645-m04\"" pod="kube-system/kube-proxy-nz528"
	I0723 14:17:26.518229       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nz528" node="ha-533645-m04"
	E0723 14:17:26.535903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f4tkn\": pod kindnet-f4tkn is already assigned to node \"ha-533645-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-f4tkn" node="ha-533645-m04"
	E0723 14:17:26.538926       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2694466c-e2cd-480a-b713-2e1cd5cfdb00(kube-system/kindnet-f4tkn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f4tkn"
	E0723 14:17:26.539006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f4tkn\": pod kindnet-f4tkn is already assigned to node \"ha-533645-m04\"" pod="kube-system/kindnet-f4tkn"
	I0723 14:17:26.539031       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f4tkn" node="ha-533645-m04"
	
	
	==> kubelet <==
	Jul 23 14:16:36 ha-533645 kubelet[1366]: E0723 14:16:36.828432    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:16:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:16:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:16:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:16:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.273673    1366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nrvbf" podStartSLOduration=177.273606251 podStartE2EDuration="2m57.273606251s" podCreationTimestamp="2024-07-23 14:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-23 14:14:07.012954857 +0000 UTC m=+30.343295672" watchObservedRunningTime="2024-07-23 14:16:46.273606251 +0000 UTC m=+189.603947073"
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.274979    1366 topology_manager.go:215] "Topology Admit Handler" podUID="c96075c6-138f-49ca-80af-c75e842c5852" podNamespace="default" podName="busybox-fc5497c4f-cd87c"
	Jul 23 14:16:46 ha-533645 kubelet[1366]: W0723 14:16:46.283802    1366 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-533645" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-533645' and this object
	Jul 23 14:16:46 ha-533645 kubelet[1366]: E0723 14:16:46.284590    1366 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-533645" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-533645' and this object
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.365660    1366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl9z\" (UniqueName: \"kubernetes.io/projected/c96075c6-138f-49ca-80af-c75e842c5852-kube-api-access-fjl9z\") pod \"busybox-fc5497c4f-cd87c\" (UID: \"c96075c6-138f-49ca-80af-c75e842c5852\") " pod="default/busybox-fc5497c4f-cd87c"
	Jul 23 14:17:36 ha-533645 kubelet[1366]: E0723 14:17:36.826847    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:17:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:18:36 ha-533645 kubelet[1366]: E0723 14:18:36.828476    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:18:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:19:36 ha-533645 kubelet[1366]: E0723 14:19:36.827754    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:19:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-533645 -n ha-533645
helpers_test.go:261: (dbg) Run:  kubectl --context ha-533645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (55.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (3.20890504s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:31.833232   34918 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:31.833494   34918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:31.833507   34918 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:31.833513   34918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:31.833726   34918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:31.833893   34918 out.go:298] Setting JSON to false
	I0723 14:20:31.833920   34918 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:31.834032   34918 notify.go:220] Checking for updates...
	I0723 14:20:31.834289   34918 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:31.834302   34918 status.go:255] checking status of ha-533645 ...
	I0723 14:20:31.834702   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:31.834743   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:31.853588   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0723 14:20:31.854070   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:31.854686   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:31.854713   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:31.855143   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:31.855375   34918 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:31.856863   34918 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:31.856878   34918 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:31.857234   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:31.857275   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:31.871788   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0723 14:20:31.872165   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:31.872676   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:31.872713   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:31.873025   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:31.873229   34918 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:31.876078   34918 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:31.876524   34918 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:31.876554   34918 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:31.876697   34918 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:31.877079   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:31.877121   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:31.892423   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0723 14:20:31.892787   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:31.893283   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:31.893310   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:31.893630   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:31.893839   34918 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:31.894042   34918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:31.894071   34918 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:31.896866   34918 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:31.897384   34918 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:31.897419   34918 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:31.897483   34918 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:31.897651   34918 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:31.897792   34918 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:31.897973   34918 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:31.981878   34918 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:31.987837   34918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:32.003889   34918 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:32.003916   34918 api_server.go:166] Checking apiserver status ...
	I0723 14:20:32.003945   34918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:32.020345   34918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:32.031484   34918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:32.031538   34918 ssh_runner.go:195] Run: ls
	I0723 14:20:32.036006   34918 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:32.040246   34918 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:32.040272   34918 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:32.040285   34918 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:32.040305   34918 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:32.040710   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:32.040752   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:32.056463   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I0723 14:20:32.056895   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:32.057417   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:32.057441   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:32.057751   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:32.057923   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:32.059588   34918 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:32.059606   34918 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:32.059928   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:32.059964   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:32.074462   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0723 14:20:32.074861   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:32.075366   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:32.075390   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:32.075687   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:32.075861   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:32.078815   34918 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:32.079211   34918 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:32.079242   34918 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:32.079366   34918 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:32.079727   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:32.079771   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:32.094222   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0723 14:20:32.094682   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:32.095185   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:32.095205   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:32.095526   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:32.095700   34918 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:32.095900   34918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:32.095920   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:32.098711   34918 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:32.099107   34918 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:32.099133   34918 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:32.099278   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:32.099511   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:32.099680   34918 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:32.099856   34918 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:20:34.650668   34918 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:34.650767   34918 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:20:34.650785   34918 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:34.650792   34918 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:20:34.650809   34918 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:34.650817   34918 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:20:34.651111   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.651155   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.665959   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0723 14:20:34.666423   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.666863   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.666885   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.667195   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.667397   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:20:34.668995   34918 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:20:34.669010   34918 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:34.669274   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.669311   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.684298   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0723 14:20:34.684702   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.685198   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.685226   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.685527   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.685709   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:20:34.688492   34918 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:34.688872   34918 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:34.688898   34918 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:34.689126   34918 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:34.689509   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.689559   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.704168   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0723 14:20:34.704629   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.705058   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.705093   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.705441   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.705746   34918 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:20:34.705982   34918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:34.706004   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:20:34.708556   34918 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:34.708913   34918 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:34.708948   34918 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:34.709073   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:20:34.709203   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:20:34.709336   34918 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:20:34.709478   34918 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:20:34.790196   34918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:34.808807   34918 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:34.808839   34918 api_server.go:166] Checking apiserver status ...
	I0723 14:20:34.808881   34918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:34.824695   34918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:20:34.837979   34918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:34.838032   34918 ssh_runner.go:195] Run: ls
	I0723 14:20:34.842849   34918 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:34.849842   34918 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:34.849871   34918 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:20:34.849900   34918 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:34.849927   34918 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:20:34.850224   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.850265   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.865113   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I0723 14:20:34.865526   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.865955   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.865970   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.866323   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.866520   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:20:34.868155   34918 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:20:34.868173   34918 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:34.868629   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.868675   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.883162   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0723 14:20:34.883575   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.884084   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.884109   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.884579   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.884735   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:20:34.887345   34918 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:34.887819   34918 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:34.887833   34918 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:34.888077   34918 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:34.888477   34918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:34.888520   34918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:34.903484   34918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0723 14:20:34.903870   34918 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:34.904425   34918 main.go:141] libmachine: Using API Version  1
	I0723 14:20:34.904447   34918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:34.904738   34918 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:34.904925   34918 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:20:34.905107   34918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:34.905126   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:20:34.907642   34918 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:34.907982   34918 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:34.908007   34918 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:34.908217   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:20:34.908371   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:20:34.908528   34918 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:20:34.908662   34918 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:20:34.985602   34918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:35.001035   34918 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (5.349357903s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:35.826033   35002 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:35.826253   35002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:35.826260   35002 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:35.826264   35002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:35.826476   35002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:35.826624   35002 out.go:298] Setting JSON to false
	I0723 14:20:35.826646   35002 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:35.826685   35002 notify.go:220] Checking for updates...
	I0723 14:20:35.826974   35002 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:35.826991   35002 status.go:255] checking status of ha-533645 ...
	I0723 14:20:35.827341   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:35.827396   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:35.847340   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0723 14:20:35.847720   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:35.848233   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:35.848252   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:35.848648   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:35.848874   35002 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:35.850352   35002 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:35.850375   35002 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:35.850684   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:35.850721   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:35.865481   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I0723 14:20:35.865908   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:35.866373   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:35.866435   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:35.866764   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:35.866965   35002 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:35.869814   35002 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:35.870258   35002 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:35.870293   35002 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:35.870487   35002 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:35.870778   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:35.870819   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:35.887015   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I0723 14:20:35.887357   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:35.887780   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:35.887800   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:35.888104   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:35.888298   35002 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:35.888492   35002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:35.888515   35002 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:35.891159   35002 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:35.891550   35002 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:35.891589   35002 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:35.891780   35002 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:35.891958   35002 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:35.892125   35002 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:35.892286   35002 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:35.981595   35002 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:35.987342   35002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:36.004045   35002 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:36.004073   35002 api_server.go:166] Checking apiserver status ...
	I0723 14:20:36.004120   35002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:36.016829   35002 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:36.026299   35002 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:36.026348   35002 ssh_runner.go:195] Run: ls
	I0723 14:20:36.031019   35002 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:36.036979   35002 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:36.037000   35002 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:36.037010   35002 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:36.037025   35002 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:36.037334   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:36.037387   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:36.052123   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I0723 14:20:36.052557   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:36.053036   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:36.053059   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:36.053365   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:36.053573   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:36.055011   35002 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:36.055029   35002 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:36.055322   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:36.055358   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:36.069643   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I0723 14:20:36.070018   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:36.070532   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:36.070558   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:36.070849   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:36.071045   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:36.073763   35002 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:36.074147   35002 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:36.074180   35002 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:36.074300   35002 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:36.074645   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:36.074681   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:36.089785   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0723 14:20:36.090236   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:36.090655   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:36.090675   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:36.090986   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:36.091156   35002 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:36.091338   35002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:36.091364   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:36.093973   35002 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:36.094341   35002 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:36.094355   35002 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:36.094516   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:36.094694   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:36.094823   35002 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:36.094938   35002 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:20:37.722669   35002 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:37.722724   35002 retry.go:31] will retry after 362.532688ms: dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:40.794626   35002 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:40.794725   35002 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:20:40.794746   35002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:40.794757   35002 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:20:40.794782   35002 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:40.794793   35002 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:20:40.795119   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:40.795190   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:40.810121   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0723 14:20:40.810629   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:40.811100   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:40.811121   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:40.811463   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:40.811612   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:20:40.813125   35002 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:20:40.813143   35002 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:40.813456   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:40.813529   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:40.827857   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0723 14:20:40.828375   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:40.829053   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:40.829086   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:40.829422   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:40.829605   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:20:40.832348   35002 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:40.832780   35002 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:40.832811   35002 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:40.832952   35002 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:40.833233   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:40.833264   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:40.848088   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42019
	I0723 14:20:40.848488   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:40.849019   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:40.849038   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:40.849327   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:40.849647   35002 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:20:40.849932   35002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:40.849955   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:20:40.853234   35002 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:40.853785   35002 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:40.853811   35002 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:40.853933   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:20:40.854108   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:20:40.854290   35002 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:20:40.854477   35002 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:20:40.933801   35002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:40.948860   35002 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:40.948888   35002 api_server.go:166] Checking apiserver status ...
	I0723 14:20:40.948921   35002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:40.962573   35002 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:20:40.972071   35002 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:40.972118   35002 ssh_runner.go:195] Run: ls
	I0723 14:20:40.976429   35002 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:40.980584   35002 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:40.980603   35002 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:20:40.980611   35002 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:40.980625   35002 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:20:40.980886   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:40.980915   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:40.995585   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0723 14:20:40.996007   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:40.996484   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:40.996509   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:40.996796   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:40.996995   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:20:40.998293   35002 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:20:40.998308   35002 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:40.998626   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:40.998658   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:41.015053   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0723 14:20:41.015562   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:41.016006   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:41.016026   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:41.016269   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:41.016412   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:20:41.019154   35002 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:41.019562   35002 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:41.019583   35002 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:41.019785   35002 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:41.020059   35002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:41.020093   35002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:41.036012   35002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0723 14:20:41.036430   35002 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:41.036893   35002 main.go:141] libmachine: Using API Version  1
	I0723 14:20:41.036911   35002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:41.037207   35002 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:41.037360   35002 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:20:41.037528   35002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:41.037550   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:20:41.040754   35002 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:41.041162   35002 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:41.041206   35002 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:41.041279   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:20:41.041451   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:20:41.041592   35002 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:20:41.041722   35002 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:20:41.117682   35002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:41.133103   35002 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (5.126006417s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:42.191648   35118 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:42.191905   35118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:42.191914   35118 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:42.191918   35118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:42.192169   35118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:42.192401   35118 out.go:298] Setting JSON to false
	I0723 14:20:42.192438   35118 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:42.192499   35118 notify.go:220] Checking for updates...
	I0723 14:20:42.192847   35118 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:42.192862   35118 status.go:255] checking status of ha-533645 ...
	I0723 14:20:42.193269   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.193346   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.212485   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0723 14:20:42.212915   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.213563   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.213605   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.213918   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.214092   35118 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:42.215790   35118 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:42.215808   35118 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:42.216121   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.216155   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.231331   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I0723 14:20:42.231842   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.232313   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.232338   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.232675   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.232861   35118 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:42.236076   35118 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:42.236484   35118 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:42.236512   35118 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:42.236703   35118 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:42.237105   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.237147   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.252363   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0723 14:20:42.252762   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.253208   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.253229   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.253551   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.253743   35118 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:42.253937   35118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:42.253960   35118 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:42.256833   35118 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:42.257223   35118 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:42.257254   35118 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:42.257430   35118 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:42.257598   35118 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:42.257763   35118 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:42.257890   35118 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:42.341911   35118 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:42.347887   35118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:42.361853   35118 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:42.361883   35118 api_server.go:166] Checking apiserver status ...
	I0723 14:20:42.361925   35118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:42.374737   35118 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:42.383043   35118 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:42.383096   35118 ssh_runner.go:195] Run: ls
	I0723 14:20:42.386912   35118 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:42.390923   35118 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:42.390943   35118 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:42.390954   35118 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:42.390975   35118 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:42.391295   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.391333   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.407089   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0723 14:20:42.407477   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.407956   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.407980   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.408293   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.408489   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:42.410405   35118 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:42.410422   35118 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:42.410695   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.410729   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.426121   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0723 14:20:42.426578   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.427083   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.427100   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.427436   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.427598   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:42.430819   35118 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:42.431314   35118 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:42.431341   35118 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:42.431448   35118 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:42.431779   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:42.431825   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:42.447296   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
	I0723 14:20:42.447712   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:42.448235   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:42.448252   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:42.448589   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:42.448769   35118 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:42.448960   35118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:42.448986   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:42.451946   35118 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:42.452407   35118 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:42.452427   35118 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:42.452554   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:42.452749   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:42.452893   35118 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:42.453005   35118 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:20:43.866643   35118 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:43.866685   35118 retry.go:31] will retry after 257.573126ms: dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:46.938595   35118 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:46.938694   35118 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:20:46.938712   35118 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:46.938719   35118 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:20:46.938736   35118 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:46.938743   35118 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:20:46.939033   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:46.939071   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:46.955706   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I0723 14:20:46.956131   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:46.956651   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:46.956674   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:46.957078   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:46.957287   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:20:46.959138   35118 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:20:46.959157   35118 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:46.959478   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:46.959515   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:46.975238   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0723 14:20:46.975609   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:46.976094   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:46.976132   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:46.976448   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:46.976655   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:20:46.979236   35118 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:46.979618   35118 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:46.979641   35118 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:46.979753   35118 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:46.980037   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:46.980072   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:46.994728   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0723 14:20:46.995198   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:46.995760   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:46.995786   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:46.996060   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:46.996251   35118 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:20:46.996475   35118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:46.996503   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:20:46.999628   35118 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:47.000046   35118 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:47.000066   35118 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:47.000216   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:20:47.000423   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:20:47.000579   35118 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:20:47.000740   35118 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:20:47.081112   35118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:47.095729   35118 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:47.095765   35118 api_server.go:166] Checking apiserver status ...
	I0723 14:20:47.095797   35118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:47.108491   35118 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:20:47.118283   35118 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:47.118349   35118 ssh_runner.go:195] Run: ls
	I0723 14:20:47.122561   35118 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:47.128124   35118 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:47.128148   35118 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:20:47.128158   35118 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:47.128176   35118 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:20:47.128486   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:47.128525   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:47.144616   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0723 14:20:47.145048   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:47.145535   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:47.145571   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:47.145852   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:47.146038   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:20:47.147522   35118 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:20:47.147547   35118 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:47.147924   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:47.147967   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:47.162809   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0723 14:20:47.163185   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:47.163637   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:47.163655   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:47.163928   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:47.164161   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:20:47.166729   35118 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:47.167121   35118 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:47.167158   35118 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:47.167266   35118 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:47.167676   35118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:47.167709   35118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:47.182057   35118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0723 14:20:47.182457   35118 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:47.182933   35118 main.go:141] libmachine: Using API Version  1
	I0723 14:20:47.182960   35118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:47.183251   35118 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:47.183426   35118 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:20:47.183600   35118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:47.183619   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:20:47.186566   35118 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:47.186995   35118 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:47.187030   35118 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:47.187148   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:20:47.187317   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:20:47.187422   35118 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:20:47.187536   35118 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:20:47.261214   35118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:47.276648   35118 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
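For the control-plane nodes the log also shows the apiserver check: status locates the kube-apiserver process with `pgrep`, tries to read its freezer cgroup (the "unable to find freezer cgroup" warning is non-fatal), and then confirms liveness with an HTTP GET against `https://192.168.39.254:8443/healthz` on the HA virtual IP, which returns 200 "ok". The following is a minimal sketch of that healthz request, under two assumptions that are not stated in the log: that `/healthz` answers without client credentials (as the 200 responses suggest) and that skipping TLS verification is acceptable because the cluster CA is not loaded in this standalone example.

```go
// healthz_check.go - illustrative sketch of the apiserver liveness probe seen
// in the log; not minikube's implementation. Assumes /healthz is reachable
// without client credentials and skips TLS verification since the cluster CA
// is not available here.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(endpoint string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("%s returned %d: %s", endpoint, resp.StatusCode, body)
	}
	return string(body), nil // the log shows the body is simply "ok"
}

func main() {
	// VIP and port taken from the kubeconfig server reported in the log.
	body, err := checkHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println(body, err)
}
```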
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (3.737126789s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:49.832502   35218 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:49.833022   35218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:49.833048   35218 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:49.833058   35218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:49.833251   35218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:49.833427   35218 out.go:298] Setting JSON to false
	I0723 14:20:49.833451   35218 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:49.833574   35218 notify.go:220] Checking for updates...
	I0723 14:20:49.833847   35218 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:49.833865   35218 status.go:255] checking status of ha-533645 ...
	I0723 14:20:49.834208   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:49.834249   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:49.853693   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0723 14:20:49.854174   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:49.854794   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:49.854819   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:49.855167   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:49.855355   35218 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:49.856844   35218 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:49.856862   35218 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:49.857240   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:49.857294   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:49.872985   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I0723 14:20:49.873428   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:49.873954   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:49.873977   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:49.874281   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:49.874542   35218 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:49.877584   35218 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:49.878023   35218 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:49.878046   35218 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:49.878286   35218 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:49.878651   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:49.878697   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:49.893661   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
	I0723 14:20:49.894112   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:49.894683   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:49.894710   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:49.895011   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:49.895179   35218 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:49.895385   35218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:49.895416   35218 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:49.898479   35218 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:49.898989   35218 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:49.899014   35218 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:49.899260   35218 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:49.899429   35218 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:49.899608   35218 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:49.899743   35218 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:49.985888   35218 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:49.991594   35218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:50.006760   35218 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:50.006794   35218 api_server.go:166] Checking apiserver status ...
	I0723 14:20:50.006833   35218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:50.021812   35218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:50.031254   35218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:50.031328   35218 ssh_runner.go:195] Run: ls
	I0723 14:20:50.035564   35218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:50.039835   35218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:50.039863   35218 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:50.039877   35218 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:50.039898   35218 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:50.040227   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:50.040270   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:50.055212   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 14:20:50.055578   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:50.056037   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:50.056059   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:50.056388   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:50.056685   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:50.058281   35218 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:50.058294   35218 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:50.058628   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:50.058660   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:50.072677   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0723 14:20:50.073118   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:50.073559   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:50.073593   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:50.073911   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:50.074076   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:50.076628   35218 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:50.077039   35218 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:50.077084   35218 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:50.077186   35218 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:50.077513   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:50.077542   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:50.093334   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I0723 14:20:50.093706   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:50.094157   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:50.094178   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:50.094527   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:50.094766   35218 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:50.094981   35218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:50.095004   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:50.097831   35218 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:50.098267   35218 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:50.098293   35218 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:50.098414   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:50.098552   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:50.098692   35218 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:50.098799   35218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:20:53.178674   35218 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:20:53.178771   35218 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:20:53.178786   35218 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:53.178795   35218 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:20:53.178817   35218 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:20:53.178824   35218 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:20:53.179116   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.179151   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.195074   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 14:20:53.195503   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.195969   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.195990   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.196263   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.196570   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:20:53.198222   35218 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:20:53.198237   35218 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:53.198568   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.198613   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.212888   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0723 14:20:53.213388   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.213879   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.213899   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.214203   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.214400   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:20:53.217032   35218 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:53.217392   35218 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:53.217409   35218 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:53.217572   35218 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:20:53.217881   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.217918   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.232398   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0723 14:20:53.232753   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.233194   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.233231   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.233490   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.233698   35218 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:20:53.233897   35218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:53.233918   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:20:53.236478   35218 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:53.236863   35218 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:20:53.236905   35218 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:20:53.236999   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:20:53.237253   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:20:53.237435   35218 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:20:53.237580   35218 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:20:53.317697   35218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:53.338230   35218 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:53.338258   35218 api_server.go:166] Checking apiserver status ...
	I0723 14:20:53.338333   35218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:53.353689   35218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:20:53.365046   35218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:53.365105   35218 ssh_runner.go:195] Run: ls
	I0723 14:20:53.369785   35218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:53.374492   35218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:53.374513   35218 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:20:53.374521   35218 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:53.374534   35218 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:20:53.374832   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.374862   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.390539   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I0723 14:20:53.390891   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.391509   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.391531   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.391866   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.392056   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:20:53.393617   35218 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:20:53.393635   35218 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:53.394025   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.394070   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.411718   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0723 14:20:53.412139   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.412640   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.412663   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.412981   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.413167   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:20:53.415980   35218 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:53.416401   35218 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:53.416427   35218 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:53.416559   35218 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:20:53.416956   35218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:53.417007   35218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:53.432033   35218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0723 14:20:53.432515   35218 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:53.432978   35218 main.go:141] libmachine: Using API Version  1
	I0723 14:20:53.433002   35218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:53.433277   35218 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:53.433450   35218 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:20:53.433628   35218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:53.433649   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:20:53.436952   35218 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:53.437397   35218 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:20:53.437422   35218 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:20:53.437569   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:20:53.437750   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:20:53.437876   35218 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:20:53.438004   35218 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:20:53.513749   35218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:53.529221   35218 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
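Each failed dial to the stopped node is also retried once after a short pause (the first run logs "will retry after 257.573126ms") before status records the error state, which is why every invocation against ha-533645-m02 adds roughly three seconds to the wall-clock time of these checks. A tiny sketch of that retry-with-delay pattern follows; the delay and attempt count here are assumptions for illustration, not minikube's actual settings.

```go
// retry_dial.go - illustrative retry-with-delay sketch; the delay and attempt
// count are assumed for demonstration, not minikube's configured values.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial %s failed (%v), will retry after %s\n", addr, err, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up on %s: %w", addr, lastErr)
}

func main() {
	// The stopped secondary's SSH endpoint from the log.
	if _, err := dialWithRetry("192.168.39.182:22", 2, 250*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```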
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (3.695441613s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:20:57.291388   35336 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:20:57.291524   35336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:57.291533   35336 out.go:304] Setting ErrFile to fd 2...
	I0723 14:20:57.291539   35336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:20:57.291733   35336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:20:57.291900   35336 out.go:298] Setting JSON to false
	I0723 14:20:57.291928   35336 mustload.go:65] Loading cluster: ha-533645
	I0723 14:20:57.292036   35336 notify.go:220] Checking for updates...
	I0723 14:20:57.292373   35336 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:20:57.292391   35336 status.go:255] checking status of ha-533645 ...
	I0723 14:20:57.292868   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.292913   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.310803   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0723 14:20:57.311256   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.311753   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.311774   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.312188   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.312463   35336 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:20:57.314018   35336 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:20:57.314034   35336 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:57.314436   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.314475   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.328905   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0723 14:20:57.329262   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.329686   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.329705   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.330059   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.330243   35336 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:20:57.332826   35336 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:57.333216   35336 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:57.333249   35336 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:57.333401   35336 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:20:57.333782   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.333819   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.347934   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0723 14:20:57.348253   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.348691   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.348710   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.349084   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.349330   35336 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:20:57.349529   35336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:57.349566   35336 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:20:57.352610   35336 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:57.353041   35336 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:20:57.353070   35336 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:20:57.353200   35336 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:20:57.353397   35336 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:20:57.353542   35336 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:20:57.353683   35336 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:20:57.442319   35336 ssh_runner.go:195] Run: systemctl --version
	I0723 14:20:57.448259   35336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:20:57.462495   35336 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:20:57.462524   35336 api_server.go:166] Checking apiserver status ...
	I0723 14:20:57.462567   35336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:20:57.475618   35336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:20:57.483916   35336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:20:57.483956   35336 ssh_runner.go:195] Run: ls
	I0723 14:20:57.489360   35336 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:20:57.494147   35336 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:20:57.494174   35336 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:20:57.494185   35336 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:20:57.494206   35336 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:20:57.494526   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.494566   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.510300   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0723 14:20:57.510754   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.511199   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.511223   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.511587   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.511755   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:20:57.513515   35336 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:20:57.513532   35336 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:57.513826   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.513869   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.528200   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0723 14:20:57.528550   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.528962   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.528987   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.529381   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.529608   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:20:57.532362   35336 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:57.532770   35336 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:57.532799   35336 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:57.532924   35336 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:20:57.533290   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:20:57.533342   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:20:57.547694   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I0723 14:20:57.548060   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:20:57.548528   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:20:57.548548   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:20:57.548885   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:20:57.549132   35336 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:20:57.549341   35336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:20:57.549366   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:20:57.552368   35336 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:57.552762   35336 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:20:57.552798   35336 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:20:57.552987   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:20:57.553175   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:20:57.553371   35336 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:20:57.553514   35336 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:21:00.602627   35336 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:21:00.602713   35336 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:21:00.602727   35336 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:21:00.602737   35336 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:21:00.602753   35336 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:21:00.602761   35336 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:21:00.603046   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.603092   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.617928   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0723 14:21:00.618438   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.618897   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.618921   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.619245   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.619448   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:21:00.621119   35336 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:21:00.621139   35336 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:00.621562   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.621596   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.636061   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36171
	I0723 14:21:00.636547   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.637070   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.637094   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.637375   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.637542   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:21:00.640288   35336 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:00.640685   35336 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:00.640723   35336 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:00.640797   35336 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:00.641085   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.641116   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.655551   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40643
	I0723 14:21:00.655973   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.656518   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.656549   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.656869   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.657052   35336 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:21:00.657247   35336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:00.657267   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:21:00.660000   35336 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:00.660495   35336 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:00.660588   35336 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:00.660614   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:21:00.660772   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:21:00.660934   35336 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:21:00.661116   35336 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:21:00.746219   35336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:00.760015   35336 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:00.760045   35336 api_server.go:166] Checking apiserver status ...
	I0723 14:21:00.760095   35336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:00.773808   35336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:21:00.783477   35336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:00.783528   35336 ssh_runner.go:195] Run: ls
	I0723 14:21:00.788808   35336 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:00.795143   35336 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:00.795172   35336 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:21:00.795179   35336 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:00.795194   35336 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:21:00.795540   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.795577   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.809869   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37421
	I0723 14:21:00.810205   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.810732   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.810756   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.811099   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.811281   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:00.812826   35336 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:21:00.812842   35336 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:00.813206   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.813245   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.827210   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0723 14:21:00.827586   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.828013   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.828029   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.828301   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.828471   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:21:00.831131   35336 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:00.831601   35336 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:00.831628   35336 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:00.831767   35336 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:00.832042   35336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:00.832077   35336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:00.845957   35336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0723 14:21:00.846325   35336 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:00.846881   35336 main.go:141] libmachine: Using API Version  1
	I0723 14:21:00.846906   35336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:00.847256   35336 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:00.847491   35336 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:21:00.847687   35336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:00.847712   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:21:00.850859   35336 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:00.851265   35336 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:00.851292   35336 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:00.851409   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:21:00.851547   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:21:00.851676   35336 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:21:00.851780   35336 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:21:00.929906   35336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:00.943871   35336 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
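The stderr above traces the probe sequence that `out/minikube-linux-amd64 status` runs against each control-plane node: launch the kvm2 driver plugin, resolve the node's IP from its DHCP lease, open an SSH session, check disk usage on /var and whether the kubelet unit is active, and finally query the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. For ha-533645-m02 the SSH dial fails with "no route to host", so that node is reported as Host:Error / Kubelet:Nonexistent even though the healthz probe against the shared VIP still succeeds from the other nodes. A minimal sketch of such a healthz probe follows; the URL is copied from the log, while the client setup, timeout, and skipped TLS verification are illustrative assumptions rather than minikube's api_server.go.

	// healthz_probe.go: illustrative sketch of the apiserver health check seen in
	// the log ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
	// Not minikube's implementation; TLS verification is skipped only to keep the
	// example self-contained.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
	}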
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (3.712539635s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:21:04.158731   35438 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:21:04.159168   35438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:04.159195   35438 out.go:304] Setting ErrFile to fd 2...
	I0723 14:21:04.159203   35438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:04.159820   35438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:21:04.160043   35438 out.go:298] Setting JSON to false
	I0723 14:21:04.160070   35438 mustload.go:65] Loading cluster: ha-533645
	I0723 14:21:04.160116   35438 notify.go:220] Checking for updates...
	I0723 14:21:04.160591   35438 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:21:04.160613   35438 status.go:255] checking status of ha-533645 ...
	I0723 14:21:04.161051   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.161117   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.176307   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0723 14:21:04.176678   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.177169   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.177192   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.177531   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.177732   35438 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:21:04.179357   35438 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:21:04.179372   35438 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:04.179665   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.179696   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.195067   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0723 14:21:04.195556   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.196019   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.196046   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.196376   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.196561   35438 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:21:04.199253   35438 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:04.199725   35438 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:04.199750   35438 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:04.199909   35438 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:04.200337   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.200387   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.215110   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0723 14:21:04.215512   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.216050   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.216074   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.216395   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.216601   35438 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:21:04.216821   35438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:04.216846   35438 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:21:04.219752   35438 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:04.220169   35438 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:04.220193   35438 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:04.220303   35438 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:21:04.220490   35438 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:21:04.220652   35438 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:21:04.220767   35438 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:21:04.306026   35438 ssh_runner.go:195] Run: systemctl --version
	I0723 14:21:04.312208   35438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:04.325847   35438 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:04.325875   35438 api_server.go:166] Checking apiserver status ...
	I0723 14:21:04.325903   35438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:04.339583   35438 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:21:04.348746   35438 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:04.348805   35438 ssh_runner.go:195] Run: ls
	I0723 14:21:04.353025   35438 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:04.358897   35438 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:04.358925   35438 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:21:04.358935   35438 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:04.358951   35438 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:21:04.359246   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.359306   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.375115   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
	I0723 14:21:04.375553   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.376036   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.376058   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.376339   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.376516   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:21:04.378334   35438 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:21:04.378353   35438 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:21:04.378695   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.378742   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.394293   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0723 14:21:04.394708   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.395241   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.395266   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.395568   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.395770   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:21:04.398715   35438 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:21:04.399110   35438 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:21:04.399136   35438 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:21:04.399268   35438 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:21:04.399663   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:04.399697   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:04.414237   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
	I0723 14:21:04.414690   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:04.415201   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:04.415225   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:04.415508   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:04.415698   35438 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:21:04.415875   35438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:04.415894   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:21:04.418818   35438 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:21:04.419228   35438 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:21:04.419254   35438 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:21:04.419380   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:21:04.419539   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:21:04.419686   35438 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:21:04.419816   35438 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	W0723 14:21:07.482697   35438 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.182:22: connect: no route to host
	W0723 14:21:07.482805   35438 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	E0723 14:21:07.482824   35438 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:21:07.482854   35438 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:21:07.482878   35438 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	I0723 14:21:07.482885   35438 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:21:07.483192   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.483229   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.500552   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0723 14:21:07.501005   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.501543   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.501566   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.501943   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.502149   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:21:07.503752   35438 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:21:07.503766   35438 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:07.504060   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.504131   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.520329   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0723 14:21:07.520729   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.521220   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.521250   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.521554   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.521751   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:21:07.524573   35438 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:07.525048   35438 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:07.525068   35438 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:07.525255   35438 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:07.525565   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.525641   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.542296   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I0723 14:21:07.542771   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.543309   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.543332   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.543689   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.543998   35438 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:21:07.544206   35438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:07.544229   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:21:07.547371   35438 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:07.547941   35438 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:07.547988   35438 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:07.548114   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:21:07.548260   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:21:07.548399   35438 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:21:07.548554   35438 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:21:07.629241   35438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:07.643947   35438 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:07.643972   35438 api_server.go:166] Checking apiserver status ...
	I0723 14:21:07.644006   35438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:07.657539   35438 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:21:07.666940   35438 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:07.666999   35438 ssh_runner.go:195] Run: ls
	I0723 14:21:07.670834   35438 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:07.677513   35438 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:07.677540   35438 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:21:07.677550   35438 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:07.677570   35438 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:21:07.677891   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.677927   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.693631   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0723 14:21:07.694054   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.694622   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.694648   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.694940   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.695110   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:07.696492   35438 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:21:07.696511   35438 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:07.696843   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.696880   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.710980   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I0723 14:21:07.711450   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.711904   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.711923   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.712197   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.712404   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:21:07.715464   35438 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:07.715883   35438 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:07.715928   35438 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:07.716028   35438 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:07.716327   35438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:07.716366   35438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:07.730812   35438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40741
	I0723 14:21:07.731269   35438 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:07.731754   35438 main.go:141] libmachine: Using API Version  1
	I0723 14:21:07.731773   35438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:07.732045   35438 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:07.732288   35438 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:21:07.732519   35438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:07.732539   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:21:07.735393   35438 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:07.735750   35438 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:07.735767   35438 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:07.735951   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:21:07.736203   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:21:07.736369   35438 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:21:07.736498   35438 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:21:07.813571   35438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:07.828937   35438 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
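The same sequence repeats in this run, and the disk-usage command that fails for ha-533645-m02 is visible verbatim: sh -c "df -h /var | awk 'NR==2{print $5}'", i.e. take the second line of df's output and print its use% column. The failure happens before the command can run, at dial tcp 192.168.39.182:22: connect: no route to host, which is what converts the node's status to Error. A rough sketch of that remote check is below, assuming golang.org/x/crypto/ssh; the address, username, and key path are taken from the log, and the rest is illustrative rather than minikube's sshutil/ssh_runner code.

	// ssh_df_probe.go: illustrative sketch of the remote /var usage check,
	// assuming golang.org/x/crypto/ssh. Error handling is deliberately simple.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.182:22", cfg)
		if err != nil {
			// An unreachable node surfaces here, e.g. "connect: no route to host",
			// and the status command then reports Host:Error for that node.
			fmt.Println("dial failure:", err)
			return
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.Output(`df -h /var | awk 'NR==2{print $5}'`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("/var usage: %s", out) // e.g. "18%"
	}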
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 7 (612.678275ms)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:21:15.489694   35573 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:21:15.489955   35573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:15.489964   35573 out.go:304] Setting ErrFile to fd 2...
	I0723 14:21:15.489969   35573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:15.490195   35573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:21:15.490432   35573 out.go:298] Setting JSON to false
	I0723 14:21:15.490461   35573 mustload.go:65] Loading cluster: ha-533645
	I0723 14:21:15.490575   35573 notify.go:220] Checking for updates...
	I0723 14:21:15.490982   35573 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:21:15.491002   35573 status.go:255] checking status of ha-533645 ...
	I0723 14:21:15.491540   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.491637   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.510650   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0723 14:21:15.511111   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.511697   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.511716   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.512150   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.512379   35573 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:21:15.514775   35573 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:21:15.514796   35573 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:15.515125   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.515580   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.530815   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44891
	I0723 14:21:15.531227   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.531755   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.531776   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.532224   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.532492   35573 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:21:15.535817   35573 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:15.536345   35573 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:15.536389   35573 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:15.536587   35573 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:15.536904   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.536951   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.552119   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I0723 14:21:15.552543   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.553004   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.553026   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.553304   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.553450   35573 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:21:15.553654   35573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:15.553687   35573 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:21:15.556759   35573 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:15.557205   35573 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:15.557242   35573 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:15.557388   35573 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:21:15.557554   35573 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:21:15.557700   35573 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:21:15.557824   35573 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:21:15.649557   35573 ssh_runner.go:195] Run: systemctl --version
	I0723 14:21:15.655483   35573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:15.670644   35573 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:15.670671   35573 api_server.go:166] Checking apiserver status ...
	I0723 14:21:15.670722   35573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:15.683709   35573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:21:15.691948   35573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:15.691989   35573 ssh_runner.go:195] Run: ls
	I0723 14:21:15.696017   35573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:15.700253   35573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:15.700273   35573 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:21:15.700282   35573 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:15.700298   35573 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:21:15.700643   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.700678   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.716735   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I0723 14:21:15.717124   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.717584   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.717597   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.717885   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.718113   35573 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:21:15.719830   35573 status.go:330] ha-533645-m02 host status = "Stopped" (err=<nil>)
	I0723 14:21:15.719844   35573 status.go:343] host is not running, skipping remaining checks
	I0723 14:21:15.719852   35573 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:15.719874   35573 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:21:15.720155   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.720186   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.734544   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0723 14:21:15.734967   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.735422   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.735447   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.735767   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.735963   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:21:15.737610   35573 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:21:15.737625   35573 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:15.737920   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.737961   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.752242   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0723 14:21:15.752616   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.753036   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.753050   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.753376   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.753535   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:21:15.756534   35573 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:15.757028   35573 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:15.757051   35573 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:15.757210   35573 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:15.757496   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.757527   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.773269   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0723 14:21:15.773614   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.774064   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.774084   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.774365   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.774590   35573 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:21:15.774765   35573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:15.774783   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:21:15.777540   35573 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:15.777917   35573 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:15.777949   35573 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:15.778082   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:21:15.778239   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:21:15.778444   35573 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:21:15.778595   35573 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:21:15.857611   35573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:15.871266   35573 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:15.871294   35573 api_server.go:166] Checking apiserver status ...
	I0723 14:21:15.871326   35573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:15.884859   35573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:21:15.894300   35573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:15.894363   35573 ssh_runner.go:195] Run: ls
	I0723 14:21:15.898530   35573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:15.902942   35573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:15.902966   35573 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:21:15.902974   35573 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:15.902988   35573 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:21:15.903326   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.903374   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.918035   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
	I0723 14:21:15.918589   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.919066   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.919084   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.919436   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.919653   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:15.921401   35573 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:21:15.921420   35573 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:15.921756   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.921792   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.938347   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I0723 14:21:15.938882   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.939367   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.939400   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.939703   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.939902   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:21:15.943201   35573 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:15.943664   35573 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:15.943704   35573 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:15.943872   35573 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:15.944204   35573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:15.944246   35573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:15.959031   35573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0723 14:21:15.959475   35573 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:15.959957   35573 main.go:141] libmachine: Using API Version  1
	I0723 14:21:15.959983   35573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:15.960311   35573 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:15.960520   35573 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:21:15.960714   35573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:15.960733   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:21:15.963621   35573 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:15.964011   35573 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:15.964040   35573 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:15.964257   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:21:15.964422   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:21:15.964618   35573 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:21:15.964761   35573 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:21:16.046162   35573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:16.060715   35573 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
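The status trace above probes the control plane the same way on every control-plane node: locate the kube-apiserver process, attempt the (cgroup v1 only) freezer lookup, then fall back to an HTTPS GET against /healthz and treat a 200 "ok" as Running. Below is a minimal Go sketch of that last step only; it is not minikube's actual code, the VIP address is copied from the log, and InsecureSkipVerify stands in for loading the cluster CA that a real client would use.

```go
// Sketch: probe an apiserver /healthz endpoint as traced in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip TLS verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
```

For the stopped ha-533645-m02 host this probe is never reached; as the log shows, the host state is Stopped and the remaining checks are skipped.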
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 7 (597.576757ms)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-533645-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:21:24.170349   35680 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:21:24.170642   35680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:24.170651   35680 out.go:304] Setting ErrFile to fd 2...
	I0723 14:21:24.170655   35680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:24.170832   35680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:21:24.170985   35680 out.go:298] Setting JSON to false
	I0723 14:21:24.171009   35680 mustload.go:65] Loading cluster: ha-533645
	I0723 14:21:24.171075   35680 notify.go:220] Checking for updates...
	I0723 14:21:24.171373   35680 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:21:24.171388   35680 status.go:255] checking status of ha-533645 ...
	I0723 14:21:24.171733   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.171786   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.191751   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0723 14:21:24.192182   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.192827   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.192847   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.193181   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.193373   35680 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:21:24.195173   35680 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:21:24.195195   35680 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:24.195543   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.195592   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.209988   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0723 14:21:24.210397   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.210829   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.210851   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.211164   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.211333   35680 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:21:24.214447   35680 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:24.214926   35680 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:24.214955   35680 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:24.215096   35680 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:21:24.215380   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.215417   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.230358   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0723 14:21:24.230773   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.231301   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.231326   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.231662   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.231829   35680 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:21:24.232033   35680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:24.232053   35680 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:21:24.234543   35680 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:24.234919   35680 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:21:24.234947   35680 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:21:24.235078   35680 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:21:24.235285   35680 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:21:24.235438   35680 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:21:24.235552   35680 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:21:24.317477   35680 ssh_runner.go:195] Run: systemctl --version
	I0723 14:21:24.323302   35680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:24.337667   35680 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:24.337695   35680 api_server.go:166] Checking apiserver status ...
	I0723 14:21:24.337734   35680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:24.353402   35680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup
	W0723 14:21:24.363039   35680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:24.363087   35680 ssh_runner.go:195] Run: ls
	I0723 14:21:24.367424   35680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:24.373077   35680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:24.373104   35680 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:21:24.373117   35680 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:24.373151   35680 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:21:24.373510   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.373543   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.389088   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0723 14:21:24.389425   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.389884   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.389904   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.390184   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.390386   35680 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:21:24.391809   35680 status.go:330] ha-533645-m02 host status = "Stopped" (err=<nil>)
	I0723 14:21:24.391824   35680 status.go:343] host is not running, skipping remaining checks
	I0723 14:21:24.391831   35680 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:24.391851   35680 status.go:255] checking status of ha-533645-m03 ...
	I0723 14:21:24.392178   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.392236   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.407317   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0723 14:21:24.407724   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.408188   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.408209   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.408529   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.408727   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:21:24.410333   35680 status.go:330] ha-533645-m03 host status = "Running" (err=<nil>)
	I0723 14:21:24.410350   35680 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:24.410717   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.410757   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.426134   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0723 14:21:24.426524   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.427004   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.427022   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.427393   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.427615   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:21:24.430364   35680 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:24.430955   35680 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:24.430985   35680 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:24.431148   35680 host.go:66] Checking if "ha-533645-m03" exists ...
	I0723 14:21:24.431469   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.431511   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.446892   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0723 14:21:24.447312   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.447757   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.447781   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.448095   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.448289   35680 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:21:24.448503   35680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:24.448523   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:21:24.451064   35680 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:24.451433   35680 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:24.451455   35680 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:24.451573   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:21:24.451780   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:21:24.451937   35680 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:21:24.452066   35680 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:21:24.530252   35680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:24.544321   35680 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:21:24.544348   35680 api_server.go:166] Checking apiserver status ...
	I0723 14:21:24.544386   35680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:21:24.557076   35680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0723 14:21:24.565834   35680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:21:24.565909   35680 ssh_runner.go:195] Run: ls
	I0723 14:21:24.569831   35680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:21:24.573891   35680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:21:24.573911   35680 status.go:422] ha-533645-m03 apiserver status = Running (err=<nil>)
	I0723 14:21:24.573919   35680 status.go:257] ha-533645-m03 status: &{Name:ha-533645-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:21:24.573933   35680 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:21:24.574220   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.574260   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.589649   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43519
	I0723 14:21:24.590107   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.590646   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.590670   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.590991   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.591195   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:24.592693   35680 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:21:24.592709   35680 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:24.593071   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.593104   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.608547   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0723 14:21:24.609169   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.609690   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.609710   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.610054   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.610266   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:21:24.613674   35680 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:24.614094   35680 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:24.614122   35680 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:24.614251   35680 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:21:24.614581   35680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:24.614616   35680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:24.629309   35680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0723 14:21:24.629834   35680 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:24.630320   35680 main.go:141] libmachine: Using API Version  1
	I0723 14:21:24.630335   35680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:24.630692   35680 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:24.630903   35680 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:21:24.631124   35680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:21:24.631143   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:21:24.633900   35680 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:24.634301   35680 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:24.634334   35680 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:24.634500   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:21:24.634694   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:21:24.634835   35680 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:21:24.634945   35680 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:21:24.713869   35680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:21:24.727710   35680 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr" : exit status 7
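For reference, a minimal Go sketch of re-running the same status command and capturing its exit code, as the assertion above does; the binary path and profile name are taken from the log, and treating any non-zero exit as "cluster not fully healthy" is an assumption rather than minikube's documented exit-code contract.

```go
// Sketch: invoke `minikube status` and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-533645",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("status exited with code 0")
	case errors.As(err, &exitErr):
		// In the run above this is 7, reflecting the stopped ha-533645-m02 node.
		fmt.Println("status exited with code", exitErr.ExitCode())
	default:
		fmt.Println("failed to run status:", err)
	}
}
```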
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-533645 -n ha-533645
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-533645 logs -n 25: (1.414807266s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m03_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m04 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp testdata/cp-test.txt                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m04_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03:/home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m03 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-533645 node stop m02 -v=7                                                    | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-533645 node start m02 -v=7                                                   | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:12:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:12:58.672274   29532 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:12:58.672396   29532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:58.672405   29532 out.go:304] Setting ErrFile to fd 2...
	I0723 14:12:58.672410   29532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:58.672592   29532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:12:58.673181   29532 out.go:298] Setting JSON to false
	I0723 14:12:58.674012   29532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3325,"bootTime":1721740654,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:12:58.674070   29532 start.go:139] virtualization: kvm guest
	I0723 14:12:58.676433   29532 out.go:177] * [ha-533645] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:12:58.677903   29532 notify.go:220] Checking for updates...
	I0723 14:12:58.677916   29532 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:12:58.679517   29532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:12:58.680865   29532 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:12:58.682045   29532 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:58.683336   29532 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:12:58.684490   29532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:12:58.685826   29532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:12:58.719886   29532 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 14:12:58.721256   29532 start.go:297] selected driver: kvm2
	I0723 14:12:58.721288   29532 start.go:901] validating driver "kvm2" against <nil>
	I0723 14:12:58.721309   29532 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:12:58.722079   29532 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:12:58.722169   29532 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:12:58.736944   29532 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:12:58.736992   29532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:12:58.737216   29532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:12:58.737300   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:12:58.737313   29532 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0723 14:12:58.737320   29532 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:12:58.737371   29532 start.go:340] cluster config:
	{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:12:58.737466   29532 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:12:58.739356   29532 out.go:177] * Starting "ha-533645" primary control-plane node in "ha-533645" cluster
	I0723 14:12:58.740608   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:12:58.740643   29532 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:12:58.740661   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:12:58.740724   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:12:58.740734   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:12:58.741010   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:12:58.741028   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json: {Name:mk8b3be7d33f3876fb077f6ec49a9ae7625ff727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:12:58.741160   29532 start.go:360] acquireMachinesLock for ha-533645: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:12:58.741188   29532 start.go:364] duration metric: took 16.714µs to acquireMachinesLock for "ha-533645"
	I0723 14:12:58.741203   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:12:58.741258   29532 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 14:12:58.742731   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:12:58.742853   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:58.742885   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:58.757854   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0723 14:12:58.758290   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:58.758822   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:12:58.758845   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:58.759180   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:58.759420   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:12:58.759561   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:12:58.759673   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:12:58.759702   29532 client.go:168] LocalClient.Create starting
	I0723 14:12:58.759736   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:12:58.759768   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:12:58.759790   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:12:58.759861   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:12:58.759884   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:12:58.759901   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:12:58.759936   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:12:58.759948   29532 main.go:141] libmachine: (ha-533645) Calling .PreCreateCheck
	I0723 14:12:58.760329   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:12:58.760736   29532 main.go:141] libmachine: Creating machine...
	I0723 14:12:58.760752   29532 main.go:141] libmachine: (ha-533645) Calling .Create
	I0723 14:12:58.760880   29532 main.go:141] libmachine: (ha-533645) Creating KVM machine...
	I0723 14:12:58.762130   29532 main.go:141] libmachine: (ha-533645) DBG | found existing default KVM network
	I0723 14:12:58.762820   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:58.762698   29555 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0723 14:12:58.762838   29532 main.go:141] libmachine: (ha-533645) DBG | created network xml: 
	I0723 14:12:58.762856   29532 main.go:141] libmachine: (ha-533645) DBG | <network>
	I0723 14:12:58.762867   29532 main.go:141] libmachine: (ha-533645) DBG |   <name>mk-ha-533645</name>
	I0723 14:12:58.762875   29532 main.go:141] libmachine: (ha-533645) DBG |   <dns enable='no'/>
	I0723 14:12:58.762882   29532 main.go:141] libmachine: (ha-533645) DBG |   
	I0723 14:12:58.762893   29532 main.go:141] libmachine: (ha-533645) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0723 14:12:58.762903   29532 main.go:141] libmachine: (ha-533645) DBG |     <dhcp>
	I0723 14:12:58.762929   29532 main.go:141] libmachine: (ha-533645) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0723 14:12:58.762947   29532 main.go:141] libmachine: (ha-533645) DBG |     </dhcp>
	I0723 14:12:58.762956   29532 main.go:141] libmachine: (ha-533645) DBG |   </ip>
	I0723 14:12:58.762970   29532 main.go:141] libmachine: (ha-533645) DBG |   
	I0723 14:12:58.762997   29532 main.go:141] libmachine: (ha-533645) DBG | </network>
	I0723 14:12:58.763014   29532 main.go:141] libmachine: (ha-533645) DBG | 
	I0723 14:12:58.767859   29532 main.go:141] libmachine: (ha-533645) DBG | trying to create private KVM network mk-ha-533645 192.168.39.0/24...
	I0723 14:12:58.841046   29532 main.go:141] libmachine: (ha-533645) DBG | private KVM network mk-ha-533645 192.168.39.0/24 created
	I0723 14:12:58.841154   29532 main.go:141] libmachine: (ha-533645) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 ...
	I0723 14:12:58.841168   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:58.841006   29555 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:58.841179   29532 main.go:141] libmachine: (ha-533645) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:12:58.841267   29532 main.go:141] libmachine: (ha-533645) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:12:59.077944   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.077811   29555 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa...
	I0723 14:12:59.183323   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.183169   29555 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/ha-533645.rawdisk...
	I0723 14:12:59.183379   29532 main.go:141] libmachine: (ha-533645) DBG | Writing magic tar header
	I0723 14:12:59.183404   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 (perms=drwx------)
	I0723 14:12:59.183432   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:12:59.183440   29532 main.go:141] libmachine: (ha-533645) DBG | Writing SSH key tar header
	I0723 14:12:59.183452   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:12:59.183278   29555 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645 ...
	I0723 14:12:59.183459   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645
	I0723 14:12:59.183467   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:12:59.183474   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:59.183484   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:12:59.183491   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:12:59.183501   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:12:59.183513   29532 main.go:141] libmachine: (ha-533645) DBG | Checking permissions on dir: /home
	I0723 14:12:59.183524   29532 main.go:141] libmachine: (ha-533645) DBG | Skipping /home - not owner
	I0723 14:12:59.183535   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:12:59.183549   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:12:59.183558   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:12:59.183595   29532 main.go:141] libmachine: (ha-533645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:12:59.183620   29532 main.go:141] libmachine: (ha-533645) Creating domain...
	I0723 14:12:59.184578   29532 main.go:141] libmachine: (ha-533645) define libvirt domain using xml: 
	I0723 14:12:59.184594   29532 main.go:141] libmachine: (ha-533645) <domain type='kvm'>
	I0723 14:12:59.184601   29532 main.go:141] libmachine: (ha-533645)   <name>ha-533645</name>
	I0723 14:12:59.184609   29532 main.go:141] libmachine: (ha-533645)   <memory unit='MiB'>2200</memory>
	I0723 14:12:59.184624   29532 main.go:141] libmachine: (ha-533645)   <vcpu>2</vcpu>
	I0723 14:12:59.184634   29532 main.go:141] libmachine: (ha-533645)   <features>
	I0723 14:12:59.184641   29532 main.go:141] libmachine: (ha-533645)     <acpi/>
	I0723 14:12:59.184646   29532 main.go:141] libmachine: (ha-533645)     <apic/>
	I0723 14:12:59.184658   29532 main.go:141] libmachine: (ha-533645)     <pae/>
	I0723 14:12:59.184672   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.184680   29532 main.go:141] libmachine: (ha-533645)   </features>
	I0723 14:12:59.184688   29532 main.go:141] libmachine: (ha-533645)   <cpu mode='host-passthrough'>
	I0723 14:12:59.184706   29532 main.go:141] libmachine: (ha-533645)   
	I0723 14:12:59.184730   29532 main.go:141] libmachine: (ha-533645)   </cpu>
	I0723 14:12:59.184738   29532 main.go:141] libmachine: (ha-533645)   <os>
	I0723 14:12:59.184743   29532 main.go:141] libmachine: (ha-533645)     <type>hvm</type>
	I0723 14:12:59.184753   29532 main.go:141] libmachine: (ha-533645)     <boot dev='cdrom'/>
	I0723 14:12:59.184759   29532 main.go:141] libmachine: (ha-533645)     <boot dev='hd'/>
	I0723 14:12:59.184765   29532 main.go:141] libmachine: (ha-533645)     <bootmenu enable='no'/>
	I0723 14:12:59.184770   29532 main.go:141] libmachine: (ha-533645)   </os>
	I0723 14:12:59.184776   29532 main.go:141] libmachine: (ha-533645)   <devices>
	I0723 14:12:59.184782   29532 main.go:141] libmachine: (ha-533645)     <disk type='file' device='cdrom'>
	I0723 14:12:59.184790   29532 main.go:141] libmachine: (ha-533645)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/boot2docker.iso'/>
	I0723 14:12:59.184796   29532 main.go:141] libmachine: (ha-533645)       <target dev='hdc' bus='scsi'/>
	I0723 14:12:59.184814   29532 main.go:141] libmachine: (ha-533645)       <readonly/>
	I0723 14:12:59.184832   29532 main.go:141] libmachine: (ha-533645)     </disk>
	I0723 14:12:59.184845   29532 main.go:141] libmachine: (ha-533645)     <disk type='file' device='disk'>
	I0723 14:12:59.184857   29532 main.go:141] libmachine: (ha-533645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:12:59.184872   29532 main.go:141] libmachine: (ha-533645)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/ha-533645.rawdisk'/>
	I0723 14:12:59.184884   29532 main.go:141] libmachine: (ha-533645)       <target dev='hda' bus='virtio'/>
	I0723 14:12:59.184895   29532 main.go:141] libmachine: (ha-533645)     </disk>
	I0723 14:12:59.184908   29532 main.go:141] libmachine: (ha-533645)     <interface type='network'>
	I0723 14:12:59.184921   29532 main.go:141] libmachine: (ha-533645)       <source network='mk-ha-533645'/>
	I0723 14:12:59.184931   29532 main.go:141] libmachine: (ha-533645)       <model type='virtio'/>
	I0723 14:12:59.184942   29532 main.go:141] libmachine: (ha-533645)     </interface>
	I0723 14:12:59.184951   29532 main.go:141] libmachine: (ha-533645)     <interface type='network'>
	I0723 14:12:59.184963   29532 main.go:141] libmachine: (ha-533645)       <source network='default'/>
	I0723 14:12:59.184974   29532 main.go:141] libmachine: (ha-533645)       <model type='virtio'/>
	I0723 14:12:59.184990   29532 main.go:141] libmachine: (ha-533645)     </interface>
	I0723 14:12:59.185000   29532 main.go:141] libmachine: (ha-533645)     <serial type='pty'>
	I0723 14:12:59.185011   29532 main.go:141] libmachine: (ha-533645)       <target port='0'/>
	I0723 14:12:59.185019   29532 main.go:141] libmachine: (ha-533645)     </serial>
	I0723 14:12:59.185030   29532 main.go:141] libmachine: (ha-533645)     <console type='pty'>
	I0723 14:12:59.185044   29532 main.go:141] libmachine: (ha-533645)       <target type='serial' port='0'/>
	I0723 14:12:59.185072   29532 main.go:141] libmachine: (ha-533645)     </console>
	I0723 14:12:59.185081   29532 main.go:141] libmachine: (ha-533645)     <rng model='virtio'>
	I0723 14:12:59.185094   29532 main.go:141] libmachine: (ha-533645)       <backend model='random'>/dev/random</backend>
	I0723 14:12:59.185102   29532 main.go:141] libmachine: (ha-533645)     </rng>
	I0723 14:12:59.185118   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.185128   29532 main.go:141] libmachine: (ha-533645)     
	I0723 14:12:59.185139   29532 main.go:141] libmachine: (ha-533645)   </devices>
	I0723 14:12:59.185145   29532 main.go:141] libmachine: (ha-533645) </domain>
	I0723 14:12:59.185152   29532 main.go:141] libmachine: (ha-533645) 
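	The block above is the raw libvirt domain XML the kvm2 driver hands to the hypervisor: CD-ROM boot from the boot2docker ISO, a raw virtio disk, two virtio NICs (networks mk-ha-533645 and default), a serial console, and a virtio RNG. As a minimal, hedged sketch only (not minikube's actual driver code; the file name, URI, and error handling are assumptions), defining and starting such a domain with the libvirt Go bindings looks roughly like this:

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed binding, equivalent in spirit to what the kvm2 driver uses
	)

	func main() {
		// Hypothetical file holding the domain XML shown in the log above.
		xml, err := os.ReadFile("ha-533645.xml")
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent domain from XML, then start it
		// (corresponds to the "Getting domain xml..." / "Creating domain..." lines below).
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}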
	I0723 14:12:59.189460   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:d7:33:3e in network default
	I0723 14:12:59.189915   29532 main.go:141] libmachine: (ha-533645) Ensuring networks are active...
	I0723 14:12:59.189930   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:12:59.190515   29532 main.go:141] libmachine: (ha-533645) Ensuring network default is active
	I0723 14:12:59.190817   29532 main.go:141] libmachine: (ha-533645) Ensuring network mk-ha-533645 is active
	I0723 14:12:59.191444   29532 main.go:141] libmachine: (ha-533645) Getting domain xml...
	I0723 14:12:59.192254   29532 main.go:141] libmachine: (ha-533645) Creating domain...
	I0723 14:13:00.367182   29532 main.go:141] libmachine: (ha-533645) Waiting to get IP...
	I0723 14:13:00.367830   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.368289   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.368309   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.368263   29555 retry.go:31] will retry after 233.748173ms: waiting for machine to come up
	I0723 14:13:00.603785   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.604342   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.604373   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.604301   29555 retry.go:31] will retry after 286.19202ms: waiting for machine to come up
	I0723 14:13:00.891818   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:00.892280   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:00.892312   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:00.892242   29555 retry.go:31] will retry after 451.009456ms: waiting for machine to come up
	I0723 14:13:01.344946   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:01.345381   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:01.345407   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:01.345355   29555 retry.go:31] will retry after 553.896723ms: waiting for machine to come up
	I0723 14:13:01.901183   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:01.901698   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:01.901726   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:01.901658   29555 retry.go:31] will retry after 573.029693ms: waiting for machine to come up
	I0723 14:13:02.476534   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:02.476957   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:02.476983   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:02.476912   29555 retry.go:31] will retry after 687.916409ms: waiting for machine to come up
	I0723 14:13:03.166977   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:03.167398   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:03.167425   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:03.167351   29555 retry.go:31] will retry after 1.032404149s: waiting for machine to come up
	I0723 14:13:04.201178   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:04.202182   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:04.202205   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:04.202127   29555 retry.go:31] will retry after 1.12337681s: waiting for machine to come up
	I0723 14:13:05.326795   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:05.327203   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:05.327232   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:05.327158   29555 retry.go:31] will retry after 1.320525567s: waiting for machine to come up
	I0723 14:13:06.649527   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:06.649867   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:06.649886   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:06.649819   29555 retry.go:31] will retry after 2.047276994s: waiting for machine to come up
	I0723 14:13:08.699610   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:08.700095   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:08.700128   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:08.700043   29555 retry.go:31] will retry after 2.504888725s: waiting for machine to come up
	I0723 14:13:11.208286   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:11.208682   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:11.208711   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:11.208634   29555 retry.go:31] will retry after 3.516838711s: waiting for machine to come up
	I0723 14:13:14.727069   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:14.727433   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find current IP address of domain ha-533645 in network mk-ha-533645
	I0723 14:13:14.727466   29532 main.go:141] libmachine: (ha-533645) DBG | I0723 14:13:14.727385   29555 retry.go:31] will retry after 3.819451455s: waiting for machine to come up
	I0723 14:13:18.551305   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.551720   29532 main.go:141] libmachine: (ha-533645) Found IP for machine: 192.168.39.103
	I0723 14:13:18.551742   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has current primary IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.551749   29532 main.go:141] libmachine: (ha-533645) Reserving static IP address...
	I0723 14:13:18.552061   29532 main.go:141] libmachine: (ha-533645) DBG | unable to find host DHCP lease matching {name: "ha-533645", mac: "52:54:00:a6:b1:de", ip: "192.168.39.103"} in network mk-ha-533645
	I0723 14:13:18.623653   29532 main.go:141] libmachine: (ha-533645) DBG | Getting to WaitForSSH function...
	I0723 14:13:18.623681   29532 main.go:141] libmachine: (ha-533645) Reserved static IP address: 192.168.39.103
	I0723 14:13:18.623695   29532 main.go:141] libmachine: (ha-533645) Waiting for SSH to be available...
	I0723 14:13:18.625925   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.626286   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.626312   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.626489   29532 main.go:141] libmachine: (ha-533645) DBG | Using SSH client type: external
	I0723 14:13:18.626516   29532 main.go:141] libmachine: (ha-533645) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa (-rw-------)
	I0723 14:13:18.626553   29532 main.go:141] libmachine: (ha-533645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:13:18.626567   29532 main.go:141] libmachine: (ha-533645) DBG | About to run SSH command:
	I0723 14:13:18.626581   29532 main.go:141] libmachine: (ha-533645) DBG | exit 0
	I0723 14:13:18.758328   29532 main.go:141] libmachine: (ha-533645) DBG | SSH cmd err, output: <nil>: 
	I0723 14:13:18.758648   29532 main.go:141] libmachine: (ha-533645) KVM machine creation complete!
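	The repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop with growing, jittered delays that queries the libvirt network's DHCP leases until the new MAC address shows an IP. A rough standalone sketch of that pattern, with a hypothetical lookupIP helper standing in for the lease query (this is an illustration, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for asking the libvirt network (here mk-ha-533645) for the
	// DHCP lease of a MAC address; it always fails in this sketch.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP polls lookupIP, sleeping a growing, jittered delay between attempts,
	// similar to the 233ms, 286ms, 451ms, ... sequence in the log.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:a6:b1:de", 5*time.Second); err == nil {
			fmt.Println("found IP:", ip)
		} else {
			fmt.Println(err)
		}
	}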
	I0723 14:13:18.758933   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:13:18.759466   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:18.759692   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:18.759871   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:13:18.759909   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:18.761057   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:13:18.761075   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:13:18.761091   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:13:18.761100   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.763501   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.763869   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.763888   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.764092   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.764245   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.764400   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.764550   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.764771   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.764959   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.764969   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:13:18.877574   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:13:18.877597   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:13:18.877605   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.880535   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.880887   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.880928   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.881069   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.881257   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.881431   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.881604   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.881789   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.881963   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.881974   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:13:18.994799   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:13:18.994882   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:13:18.994896   29532 main.go:141] libmachine: Provisioning with buildroot...
	I0723 14:13:18.994907   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:18.995128   29532 buildroot.go:166] provisioning hostname "ha-533645"
	I0723 14:13:18.995160   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:18.995337   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:18.997998   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.998333   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:18.998360   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:18.998527   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:18.998682   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.998814   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:18.998906   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:18.999009   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:18.999221   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:18.999235   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645 && echo "ha-533645" | sudo tee /etc/hostname
	I0723 14:13:19.128271   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:13:19.128312   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.130983   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.131417   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.131446   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.131603   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.131806   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.131959   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.132097   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.132278   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:19.132491   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:19.132509   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:13:19.256881   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:13:19.256908   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:13:19.256958   29532 buildroot.go:174] setting up certificates
	I0723 14:13:19.256970   29532 provision.go:84] configureAuth start
	I0723 14:13:19.256980   29532 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:13:19.257259   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:19.260049   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.260464   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.260489   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.260631   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.262752   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.263139   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.263164   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.263292   29532 provision.go:143] copyHostCerts
	I0723 14:13:19.263352   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:13:19.263398   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:13:19.263410   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:13:19.263486   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:13:19.263596   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:13:19.263624   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:13:19.263632   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:13:19.263675   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:13:19.263737   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:13:19.263760   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:13:19.263768   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:13:19.263799   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:13:19.263868   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645 san=[127.0.0.1 192.168.39.103 ha-533645 localhost minikube]
	I0723 14:13:19.813421   29532 provision.go:177] copyRemoteCerts
	I0723 14:13:19.813491   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:13:19.813515   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.816359   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.816799   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.816826   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.817027   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.817246   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.817440   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.817562   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:19.904061   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:13:19.904138   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:13:19.927488   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:13:19.927553   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0723 14:13:19.949586   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:13:19.949646   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:13:19.971467   29532 provision.go:87] duration metric: took 714.485733ms to configureAuth
	I0723 14:13:19.971489   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:13:19.971682   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:19.971751   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:19.974778   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.975112   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:19.975155   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:19.975291   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:19.975522   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.975706   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:19.975830   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:19.976045   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:19.976223   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:19.976242   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:13:20.253784   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:13:20.253809   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:13:20.253818   29532 main.go:141] libmachine: (ha-533645) Calling .GetURL
	I0723 14:13:20.255041   29532 main.go:141] libmachine: (ha-533645) DBG | Using libvirt version 6000000
	I0723 14:13:20.257204   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.257568   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.257603   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.257770   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:13:20.257785   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:13:20.257792   29532 client.go:171] duration metric: took 21.498079198s to LocalClient.Create
	I0723 14:13:20.257819   29532 start.go:167] duration metric: took 21.49814807s to libmachine.API.Create "ha-533645"
	I0723 14:13:20.257827   29532 start.go:293] postStartSetup for "ha-533645" (driver="kvm2")
	I0723 14:13:20.257836   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:13:20.257851   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.258057   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:13:20.258078   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.260109   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.260423   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.260441   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.260489   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.260632   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.260757   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.260893   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.349477   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:13:20.353478   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:13:20.353498   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:13:20.353581   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:13:20.353670   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:13:20.353681   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:13:20.353787   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:13:20.363689   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:13:20.386551   29532 start.go:296] duration metric: took 128.692671ms for postStartSetup
	I0723 14:13:20.386625   29532 main.go:141] libmachine: (ha-533645) Calling .GetConfigRaw
	I0723 14:13:20.387156   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:20.389939   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.390372   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.390419   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.390644   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:20.390824   29532 start.go:128] duration metric: took 21.649555719s to createHost
	I0723 14:13:20.390846   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.393022   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.393337   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.393368   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.393515   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.393711   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.393892   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.394045   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.394236   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:13:20.394426   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:13:20.394441   29532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:13:20.506831   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744000.482293360
	
	I0723 14:13:20.506853   29532 fix.go:216] guest clock: 1721744000.482293360
	I0723 14:13:20.506865   29532 fix.go:229] Guest: 2024-07-23 14:13:20.48229336 +0000 UTC Remote: 2024-07-23 14:13:20.390836223 +0000 UTC m=+21.751704249 (delta=91.457137ms)
	I0723 14:13:20.506915   29532 fix.go:200] guest clock delta is within tolerance: 91.457137ms
	I0723 14:13:20.506923   29532 start.go:83] releasing machines lock for "ha-533645", held for 21.76572613s
	I0723 14:13:20.506949   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.507189   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:20.509580   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.509983   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.510015   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.510240   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.510782   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.510956   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:20.511028   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:13:20.511087   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.511186   29532 ssh_runner.go:195] Run: cat /version.json
	I0723 14:13:20.511210   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:20.513410   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513685   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.513710   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513796   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.513888   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.514054   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.514197   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.514227   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:20.514272   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:20.514308   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.514395   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:20.514580   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:20.514730   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:20.514864   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:20.595339   29532 ssh_runner.go:195] Run: systemctl --version
	I0723 14:13:20.628867   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:13:20.784107   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:13:20.789943   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:13:20.790008   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:13:20.805053   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 14:13:20.805072   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:13:20.805139   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:13:20.820000   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:13:20.832376   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:13:20.832438   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:13:20.845699   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:13:20.858830   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:13:20.972567   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:13:21.107566   29532 docker.go:233] disabling docker service ...
	I0723 14:13:21.107632   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:13:21.121555   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:13:21.134136   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:13:21.262624   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:13:21.391783   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:13:21.404689   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:13:21.421455   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:13:21.421518   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.431023   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:13:21.431075   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.440711   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.450208   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.459592   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:13:21.469581   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.479380   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.495105   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:13:21.504735   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:13:21.513466   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:13:21.513514   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:13:21.526071   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:13:21.534984   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:13:21.641089   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
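	The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image pinned to registry.k8s.io/pause:3.9, cgroup_manager switched to cgroupfs, the unprivileged-port sysctl added) before crio is restarted. Purely as an illustration of what those edits do, the two central substitutions could be done natively in Go roughly like this (path taken from the log; regexes simplified):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			log.Fatal(err)
		}
		// A real run then does: systemctl daemon-reload && systemctl restart crio
	}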
	I0723 14:13:21.773861   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:13:21.773940   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:13:21.778588   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:13:21.778652   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:13:21.782156   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:13:21.819340   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:13:21.819414   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:13:21.850001   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:13:21.878625   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:13:21.880044   29532 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:13:21.883002   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:21.883375   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:21.883407   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:21.883591   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:13:21.887590   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:13:21.900122   29532 kubeadm.go:883] updating cluster {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:13:21.900247   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:13:21.900324   29532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:13:21.932197   29532 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 14:13:21.932271   29532 ssh_runner.go:195] Run: which lz4
	I0723 14:13:21.935844   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0723 14:13:21.935943   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 14:13:21.939680   29532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 14:13:21.939714   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 14:13:23.123318   29532 crio.go:462] duration metric: took 1.187404654s to copy over tarball
	I0723 14:13:23.123381   29532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 14:13:25.188987   29532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06558121s)
	I0723 14:13:25.189014   29532 crio.go:469] duration metric: took 2.065669362s to extract the tarball
	I0723 14:13:25.189023   29532 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 14:13:25.225220   29532 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:13:25.266110   29532 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:13:25.266131   29532 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:13:25.266141   29532 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0723 14:13:25.266252   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:13:25.266331   29532 ssh_runner.go:195] Run: crio config
	I0723 14:13:25.313634   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:13:25.313655   29532 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 14:13:25.313664   29532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:13:25.313685   29532 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-533645 NodeName:ha-533645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:13:25.313815   29532 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-533645"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
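	The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. As a small sketch only, such a stream can be decoded document by document with gopkg.in/yaml.v3 (an assumed dependency; minikube templates this file rather than parsing it):

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path from the log; any multi-document kubeadm YAML works the same way.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}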
	
	I0723 14:13:25.313836   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:13:25.313875   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:13:25.328705   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:13:25.328808   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
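	kube-vip.go renders the static Pod manifest above from the cluster's VIP settings (address 192.168.39.254 on eth0, port 8443), and it is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. A minimal text/template sketch of producing such a manifest; the template and field names here are illustrative, not minikube's actual template.

package main

import (
    "os"
    "text/template"
)

// vipParams holds the values substituted into the manifest; the field names
// are illustrative.
type vipParams struct {
    VIP       string
    Interface string
    Port      string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "{{ .Port }}"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
    t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
    // Values taken from the log above.
    _ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"})
}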
	I0723 14:13:25.328861   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:13:25.337965   29532 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:13:25.338025   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0723 14:13:25.346714   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0723 14:13:25.361425   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:13:25.375921   29532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0723 14:13:25.391070   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0723 14:13:25.405958   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:13:25.409434   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:13:25.420629   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:13:25.547842   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:13:25.564142   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.103
	I0723 14:13:25.564165   29532 certs.go:194] generating shared ca certs ...
	I0723 14:13:25.564184   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.564334   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:13:25.564399   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:13:25.564413   29532 certs.go:256] generating profile certs ...
	I0723 14:13:25.564476   29532 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:13:25.564493   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt with IP's: []
	I0723 14:13:25.700047   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt ...
	I0723 14:13:25.700087   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt: {Name:mkdba522527eda92ff71cd385739078b14c4da31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.700291   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key ...
	I0723 14:13:25.700306   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key: {Name:mk57a69bd0df653423e3606733f06b485248df4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:25.700421   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19
	I0723 14:13:25.700450   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.254]
	I0723 14:13:26.126470   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 ...
	I0723 14:13:26.126520   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19: {Name:mka663770b2d6e465e2b11b311dd3ec7a6e75761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.126726   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19 ...
	I0723 14:13:26.126747   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19: {Name:mk89e7bb911a6fd02eb0dfe171c83292d64d8626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.126852   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.f8d96a19 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:13:26.126945   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.f8d96a19 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
	I0723 14:13:26.127003   29532 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:13:26.127020   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt with IP's: []
	I0723 14:13:26.185627   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt ...
	I0723 14:13:26.185657   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt: {Name:mk0404f7330cbad6dd18ebcf21636895af066fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:26.185836   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key ...
	I0723 14:13:26.185849   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key: {Name:mk5e155bbb1610feeadaca4f2dff9a332eedfeec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
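	Each "Generating cert ... with IP's" step above signs a fresh key pair with the shared minikube CA, embedding the listed IPs as SANs (the apiserver cert, for example, carries the service IP, localhost, the node IP, and the HA VIP). A rough crypto/x509 sketch of that signing step; loading the CA from disk is omitted and the field choices are illustrative, not minikube's exact code.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "time"
)

// signServingCert signs a new key pair with the given CA, embedding the SAN IPs,
// roughly what the "Generating cert ... with IP's: [...]" steps above do.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        return nil, nil, err
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
        IPAddresses:  ips,
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    if err != nil {
        return nil, nil, err
    }
    certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    return certPEM, keyPEM, nil
}

func main() {
    // Loading the CA from /var/lib/minikube/certs and writing the output files
    // is omitted; this sketch only shows the signing step.
}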
	I0723 14:13:26.185939   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:13:26.185958   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:13:26.185969   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:13:26.185980   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:13:26.185989   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:13:26.185999   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:13:26.186008   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:13:26.186016   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:13:26.186063   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:13:26.186100   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:13:26.186109   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:13:26.186129   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:13:26.186151   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:13:26.186172   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:13:26.186208   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:13:26.186233   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.186248   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.186260   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.186750   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:13:26.210325   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:13:26.241496   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:13:26.265608   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:13:26.288171   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 14:13:26.313195   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:13:26.334438   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:13:26.355720   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:13:26.376950   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:13:26.398642   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:13:26.419904   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:13:26.441271   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:13:26.456295   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:13:26.461735   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:13:26.471578   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.475622   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.475682   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:13:26.481057   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:13:26.490898   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:13:26.501742   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.505963   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.506010   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:13:26.511324   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:13:26.521234   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:13:26.531053   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.534968   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.535016   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:13:26.540072   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
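	For each CA bundle placed under /usr/share/ca-certificates, minikube computes the OpenSSL subject hash and symlinks /etc/ssl/certs/<hash>.0 to it so TLS clients on the node trust the minikube CA. A small sketch of those two steps from Go, assuming the openssl binary is present; the path below is one of the bundles from the log.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// trustCert mimics the hash-and-symlink steps above: compute the OpenSSL
// subject hash of a PEM cert, then link /etc/ssl/certs/<hash>.0 to it.
func trustCert(pemPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    // os.Symlink fails if the link already exists; the shell one-liners in the
    // log guard against that with `test -L`.
    if _, err := os.Lstat(link); err == nil {
        return nil
    }
    return os.Symlink(pemPath, link)
}

func main() {
    if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Println("error:", err)
    }
}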
	I0723 14:13:26.549832   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:13:26.553382   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:13:26.553440   29532 kubeadm.go:392] StartCluster: {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:13:26.553509   29532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:13:26.553572   29532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:13:26.591209   29532 cri.go:89] found id: ""
	I0723 14:13:26.591290   29532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 14:13:26.600612   29532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 14:13:26.609774   29532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 14:13:26.618689   29532 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 14:13:26.618706   29532 kubeadm.go:157] found existing configuration files:
	
	I0723 14:13:26.618748   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 14:13:26.627038   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 14:13:26.627085   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 14:13:26.635723   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 14:13:26.643970   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 14:13:26.644025   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 14:13:26.652491   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 14:13:26.660797   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 14:13:26.660839   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 14:13:26.669068   29532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 14:13:26.676901   29532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 14:13:26.676951   29532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 14:13:26.685421   29532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 14:13:26.789759   29532 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 14:13:26.789852   29532 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 14:13:26.900787   29532 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 14:13:26.900881   29532 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 14:13:26.900970   29532 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 14:13:27.091115   29532 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 14:13:27.203393   29532 out.go:204]   - Generating certificates and keys ...
	I0723 14:13:27.203560   29532 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 14:13:27.203643   29532 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 14:13:27.395577   29532 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 14:13:27.650739   29532 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 14:13:27.745494   29532 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 14:13:27.944713   29532 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 14:13:28.063008   29532 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 14:13:28.063169   29532 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-533645 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0723 14:13:28.209317   29532 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 14:13:28.209435   29532 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-533645 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I0723 14:13:28.283585   29532 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 14:13:28.432664   29532 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 14:13:28.562553   29532 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 14:13:28.562811   29532 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 14:13:28.732219   29532 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 14:13:28.812903   29532 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 14:13:28.892698   29532 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 14:13:28.971458   29532 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 14:13:29.155999   29532 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 14:13:29.156504   29532 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 14:13:29.159037   29532 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 14:13:29.160699   29532 out.go:204]   - Booting up control plane ...
	I0723 14:13:29.160829   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 14:13:29.160932   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 14:13:29.161386   29532 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 14:13:29.182816   29532 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 14:13:29.183752   29532 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 14:13:29.183838   29532 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 14:13:29.304883   29532 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 14:13:29.305002   29532 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 14:13:30.305295   29532 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001177191s
	I0723 14:13:30.305434   29532 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 14:13:36.075937   29532 kubeadm.go:310] [api-check] The API server is healthy after 5.773933875s
	I0723 14:13:36.089267   29532 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 14:13:36.106915   29532 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 14:13:36.139368   29532 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 14:13:36.139600   29532 kubeadm.go:310] [mark-control-plane] Marking the node ha-533645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 14:13:36.151222   29532 kubeadm.go:310] [bootstrap-token] Using token: r8wrz6.fvv9w307l0rufqz8
	I0723 14:13:36.152654   29532 out.go:204]   - Configuring RBAC rules ...
	I0723 14:13:36.152802   29532 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 14:13:36.162332   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 14:13:36.169997   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 14:13:36.173840   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 14:13:36.180187   29532 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 14:13:36.184136   29532 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 14:13:36.485391   29532 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 14:13:36.923963   29532 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 14:13:37.486074   29532 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 14:13:37.487129   29532 kubeadm.go:310] 
	I0723 14:13:37.487198   29532 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 14:13:37.487211   29532 kubeadm.go:310] 
	I0723 14:13:37.487280   29532 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 14:13:37.487287   29532 kubeadm.go:310] 
	I0723 14:13:37.487355   29532 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 14:13:37.487433   29532 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 14:13:37.487486   29532 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 14:13:37.487492   29532 kubeadm.go:310] 
	I0723 14:13:37.487539   29532 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 14:13:37.487546   29532 kubeadm.go:310] 
	I0723 14:13:37.487584   29532 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 14:13:37.487590   29532 kubeadm.go:310] 
	I0723 14:13:37.487631   29532 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 14:13:37.487697   29532 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 14:13:37.487778   29532 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 14:13:37.487797   29532 kubeadm.go:310] 
	I0723 14:13:37.487909   29532 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 14:13:37.488010   29532 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 14:13:37.488021   29532 kubeadm.go:310] 
	I0723 14:13:37.488119   29532 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8wrz6.fvv9w307l0rufqz8 \
	I0723 14:13:37.488213   29532 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 14:13:37.488236   29532 kubeadm.go:310] 	--control-plane 
	I0723 14:13:37.488242   29532 kubeadm.go:310] 
	I0723 14:13:37.488319   29532 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 14:13:37.488326   29532 kubeadm.go:310] 
	I0723 14:13:37.488398   29532 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8wrz6.fvv9w307l0rufqz8 \
	I0723 14:13:37.488487   29532 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 14:13:37.489160   29532 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 14:13:37.489185   29532 cni.go:84] Creating CNI manager for ""
	I0723 14:13:37.489195   29532 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0723 14:13:37.491817   29532 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0723 14:13:37.493187   29532 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0723 14:13:37.498133   29532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0723 14:13:37.498151   29532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0723 14:13:37.517082   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0723 14:13:37.872313   29532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 14:13:37.872462   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645 minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=true
	I0723 14:13:37.872466   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:37.900159   29532 ops.go:34] apiserver oom_adj: -16
	I0723 14:13:38.047799   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:38.548348   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:39.047929   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:39.548745   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:40.048202   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:40.548107   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:41.048459   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:41.548430   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:42.048252   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:42.548693   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:43.048154   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:43.548461   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:44.048672   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:44.547962   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:45.048566   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:45.548044   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:46.048781   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:46.548869   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:47.047972   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:47.548625   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:48.048437   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:48.548297   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.048503   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.547915   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:13:49.629806   29532 kubeadm.go:1113] duration metric: took 11.757413084s to wait for elevateKubeSystemPrivileges
	I0723 14:13:49.629847   29532 kubeadm.go:394] duration metric: took 23.076409381s to StartCluster
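	The burst of `kubectl get sa default` calls above is a simple poll: minikube waits for the default ServiceAccount to appear before treating the control plane as ready for workloads. A minimal sketch of the same wait, using the kubeconfig path from the log; the interval and timeout are assumptions, not minikube's exact values.

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    // Poll roughly every 500ms, as the timestamps above suggest, until
    // `kubectl get sa default` succeeds or we give up.
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
        if err := cmd.Run(); err == nil {
            fmt.Println("default service account exists; control plane is ready for workloads")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for the default service account")
}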
	I0723 14:13:49.629870   29532 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:49.629959   29532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:13:49.630823   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:13:49.631055   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 14:13:49.631067   29532 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 14:13:49.631127   29532 addons.go:69] Setting storage-provisioner=true in profile "ha-533645"
	I0723 14:13:49.631140   29532 addons.go:69] Setting default-storageclass=true in profile "ha-533645"
	I0723 14:13:49.631158   29532 addons.go:234] Setting addon storage-provisioner=true in "ha-533645"
	I0723 14:13:49.631186   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:13:49.631053   29532 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:13:49.631299   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:13:49.631185   29532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-533645"
	I0723 14:13:49.631277   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:49.631603   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.631638   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.631661   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.631687   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.646860   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0723 14:13:49.646873   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0723 14:13:49.647284   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.647345   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.647824   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.647840   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.648000   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.648024   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.648296   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.648338   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.648459   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.648863   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.648891   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.650782   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:13:49.651118   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0723 14:13:49.651675   29532 cert_rotation.go:137] Starting client certificate rotation controller
	I0723 14:13:49.651919   29532 addons.go:234] Setting addon default-storageclass=true in "ha-533645"
	I0723 14:13:49.651970   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:13:49.652341   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.652379   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.664300   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I0723 14:13:49.664780   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.665339   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.665363   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.665744   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.666023   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.667762   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0723 14:13:49.667903   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:49.668110   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.668552   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.668575   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.669003   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.669465   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:49.669487   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:49.669767   29532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 14:13:49.671199   29532 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:13:49.671213   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 14:13:49.671225   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:49.674005   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.674363   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:49.674402   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.674639   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:49.674837   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:49.675004   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:49.675164   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:49.684886   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0723 14:13:49.685571   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:49.686108   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:49.686127   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:49.686479   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:49.686728   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:13:49.688380   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:13:49.688613   29532 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 14:13:49.688632   29532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 14:13:49.688651   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:13:49.691491   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.691911   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:13:49.691938   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:13:49.692072   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:13:49.692258   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:13:49.692395   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:13:49.692576   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:13:49.801260   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0723 14:13:49.812232   29532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:13:49.856436   29532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 14:13:50.350278   29532 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
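	The bash pipeline a few lines up rewrites the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to 192.168.39.1 (with fallthrough) ahead of the forward plugin, and a log directive before errors. A small Go sketch of the same edit applied to a Corefile string; the input Corefile below is an assumed, trimmed example, not the ConfigMap's exact contents.

package main

import (
    "fmt"
    "strings"
)

const corefile = `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}`

func main() {
    hostsBlock := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`
    var out strings.Builder
    for _, line := range strings.Split(corefile, "\n") {
        // Insert the hosts block just before the forward plugin, as the sed
        // expression in the log does.
        if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
            out.WriteString(hostsBlock)
        }
        out.WriteString(line + "\n")
    }
    fmt.Print(out.String())
}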
	I0723 14:13:50.557552   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557580   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557586   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557602   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557891   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.557924   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
	I0723 14:13:50.557946   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.557958   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.557966   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.557921   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
	I0723 14:13:50.557929   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558014   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558025   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.558034   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.558194   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558210   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558234   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.558247   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.558324   29532 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0723 14:13:50.558331   29532 round_trippers.go:469] Request Headers:
	I0723 14:13:50.558341   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:13:50.558350   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:13:50.571348   29532 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0723 14:13:50.571850   29532 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0723 14:13:50.571866   29532 round_trippers.go:469] Request Headers:
	I0723 14:13:50.571873   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:13:50.571877   29532 round_trippers.go:473]     Content-Type: application/json
	I0723 14:13:50.571881   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:13:50.574245   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:13:50.574374   29532 main.go:141] libmachine: Making call to close driver server
	I0723 14:13:50.574400   29532 main.go:141] libmachine: (ha-533645) Calling .Close
	I0723 14:13:50.574662   29532 main.go:141] libmachine: Successfully made call to close driver server
	I0723 14:13:50.574678   29532 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 14:13:50.574679   29532 main.go:141] libmachine: (ha-533645) DBG | Closing plugin on server side
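	The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard is the default-storageclass addon updating the "standard" StorageClass, presumably to mark it as the cluster default. A client-go sketch of that kind of update; the kubeconfig path is taken from the log and the annotation is the standard Kubernetes default-class marker, not necessarily minikube's exact change.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Kubeconfig path taken from the log; adjust for a local run.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19319-11303/kubeconfig")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ctx := context.Background()
    sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    if sc.Annotations == nil {
        sc.Annotations = map[string]string{}
    }
    // The standard annotation that marks a StorageClass as the cluster default.
    sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    if _, err := clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("standard StorageClass marked as default")
}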
	I0723 14:13:50.576415   29532 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0723 14:13:50.577704   29532 addons.go:510] duration metric: took 946.631825ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0723 14:13:50.577738   29532 start.go:246] waiting for cluster config update ...
	I0723 14:13:50.577753   29532 start.go:255] writing updated cluster config ...
	I0723 14:13:50.579300   29532 out.go:177] 
	I0723 14:13:50.580697   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:13:50.580759   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:50.582326   29532 out.go:177] * Starting "ha-533645-m02" control-plane node in "ha-533645" cluster
	I0723 14:13:50.583632   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:13:50.583658   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:13:50.583743   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:13:50.583754   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:13:50.583809   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:13:50.583954   29532 start.go:360] acquireMachinesLock for ha-533645-m02: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:13:50.583991   29532 start.go:364] duration metric: took 20.534µs to acquireMachinesLock for "ha-533645-m02"
	I0723 14:13:50.584006   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:13:50.584071   29532 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0723 14:13:50.585666   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:13:50.585738   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:13:50.585763   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:13:50.600326   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0723 14:13:50.600701   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:13:50.601155   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:13:50.601174   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:13:50.601486   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:13:50.601727   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:13:50.601908   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:13:50.602159   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:13:50.602185   29532 client.go:168] LocalClient.Create starting
	I0723 14:13:50.602223   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:13:50.602293   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:13:50.602316   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:13:50.602403   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:13:50.602436   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:13:50.602450   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:13:50.602477   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:13:50.602491   29532 main.go:141] libmachine: (ha-533645-m02) Calling .PreCreateCheck
	I0723 14:13:50.602678   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:13:50.603159   29532 main.go:141] libmachine: Creating machine...
	I0723 14:13:50.603177   29532 main.go:141] libmachine: (ha-533645-m02) Calling .Create
	I0723 14:13:50.603303   29532 main.go:141] libmachine: (ha-533645-m02) Creating KVM machine...
	I0723 14:13:50.604684   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found existing default KVM network
	I0723 14:13:50.604792   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found existing private KVM network mk-ha-533645
	I0723 14:13:50.604913   29532 main.go:141] libmachine: (ha-533645-m02) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 ...
	I0723 14:13:50.604942   29532 main.go:141] libmachine: (ha-533645-m02) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:13:50.605005   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:50.604919   29960 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:13:50.605147   29532 main.go:141] libmachine: (ha-533645-m02) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:13:50.847352   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:50.847207   29960 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa...
	I0723 14:13:51.162927   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:51.162819   29960 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/ha-533645-m02.rawdisk...
	I0723 14:13:51.162960   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Writing magic tar header
	I0723 14:13:51.162971   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Writing SSH key tar header
	I0723 14:13:51.162983   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:51.162934   29960 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 ...
	I0723 14:13:51.163125   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02
	I0723 14:13:51.163143   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02 (perms=drwx------)
	I0723 14:13:51.163151   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:13:51.163162   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:13:51.163172   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:13:51.163184   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:13:51.163194   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:13:51.163203   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Checking permissions on dir: /home
	I0723 14:13:51.163215   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Skipping /home - not owner
	I0723 14:13:51.163226   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:13:51.163237   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:13:51.163244   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:13:51.163257   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:13:51.163270   29532 main.go:141] libmachine: (ha-533645-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:13:51.163283   29532 main.go:141] libmachine: (ha-533645-m02) Creating domain...
	I0723 14:13:51.164259   29532 main.go:141] libmachine: (ha-533645-m02) define libvirt domain using xml: 
	I0723 14:13:51.164278   29532 main.go:141] libmachine: (ha-533645-m02) <domain type='kvm'>
	I0723 14:13:51.164288   29532 main.go:141] libmachine: (ha-533645-m02)   <name>ha-533645-m02</name>
	I0723 14:13:51.164312   29532 main.go:141] libmachine: (ha-533645-m02)   <memory unit='MiB'>2200</memory>
	I0723 14:13:51.164321   29532 main.go:141] libmachine: (ha-533645-m02)   <vcpu>2</vcpu>
	I0723 14:13:51.164332   29532 main.go:141] libmachine: (ha-533645-m02)   <features>
	I0723 14:13:51.164340   29532 main.go:141] libmachine: (ha-533645-m02)     <acpi/>
	I0723 14:13:51.164349   29532 main.go:141] libmachine: (ha-533645-m02)     <apic/>
	I0723 14:13:51.164376   29532 main.go:141] libmachine: (ha-533645-m02)     <pae/>
	I0723 14:13:51.164403   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164417   29532 main.go:141] libmachine: (ha-533645-m02)   </features>
	I0723 14:13:51.164433   29532 main.go:141] libmachine: (ha-533645-m02)   <cpu mode='host-passthrough'>
	I0723 14:13:51.164442   29532 main.go:141] libmachine: (ha-533645-m02)   
	I0723 14:13:51.164448   29532 main.go:141] libmachine: (ha-533645-m02)   </cpu>
	I0723 14:13:51.164453   29532 main.go:141] libmachine: (ha-533645-m02)   <os>
	I0723 14:13:51.164460   29532 main.go:141] libmachine: (ha-533645-m02)     <type>hvm</type>
	I0723 14:13:51.164465   29532 main.go:141] libmachine: (ha-533645-m02)     <boot dev='cdrom'/>
	I0723 14:13:51.164472   29532 main.go:141] libmachine: (ha-533645-m02)     <boot dev='hd'/>
	I0723 14:13:51.164479   29532 main.go:141] libmachine: (ha-533645-m02)     <bootmenu enable='no'/>
	I0723 14:13:51.164488   29532 main.go:141] libmachine: (ha-533645-m02)   </os>
	I0723 14:13:51.164507   29532 main.go:141] libmachine: (ha-533645-m02)   <devices>
	I0723 14:13:51.164519   29532 main.go:141] libmachine: (ha-533645-m02)     <disk type='file' device='cdrom'>
	I0723 14:13:51.164527   29532 main.go:141] libmachine: (ha-533645-m02)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/boot2docker.iso'/>
	I0723 14:13:51.164533   29532 main.go:141] libmachine: (ha-533645-m02)       <target dev='hdc' bus='scsi'/>
	I0723 14:13:51.164538   29532 main.go:141] libmachine: (ha-533645-m02)       <readonly/>
	I0723 14:13:51.164548   29532 main.go:141] libmachine: (ha-533645-m02)     </disk>
	I0723 14:13:51.164555   29532 main.go:141] libmachine: (ha-533645-m02)     <disk type='file' device='disk'>
	I0723 14:13:51.164563   29532 main.go:141] libmachine: (ha-533645-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:13:51.164571   29532 main.go:141] libmachine: (ha-533645-m02)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/ha-533645-m02.rawdisk'/>
	I0723 14:13:51.164578   29532 main.go:141] libmachine: (ha-533645-m02)       <target dev='hda' bus='virtio'/>
	I0723 14:13:51.164583   29532 main.go:141] libmachine: (ha-533645-m02)     </disk>
	I0723 14:13:51.164591   29532 main.go:141] libmachine: (ha-533645-m02)     <interface type='network'>
	I0723 14:13:51.164597   29532 main.go:141] libmachine: (ha-533645-m02)       <source network='mk-ha-533645'/>
	I0723 14:13:51.164603   29532 main.go:141] libmachine: (ha-533645-m02)       <model type='virtio'/>
	I0723 14:13:51.164609   29532 main.go:141] libmachine: (ha-533645-m02)     </interface>
	I0723 14:13:51.164619   29532 main.go:141] libmachine: (ha-533645-m02)     <interface type='network'>
	I0723 14:13:51.164624   29532 main.go:141] libmachine: (ha-533645-m02)       <source network='default'/>
	I0723 14:13:51.164631   29532 main.go:141] libmachine: (ha-533645-m02)       <model type='virtio'/>
	I0723 14:13:51.164637   29532 main.go:141] libmachine: (ha-533645-m02)     </interface>
	I0723 14:13:51.164642   29532 main.go:141] libmachine: (ha-533645-m02)     <serial type='pty'>
	I0723 14:13:51.164647   29532 main.go:141] libmachine: (ha-533645-m02)       <target port='0'/>
	I0723 14:13:51.164654   29532 main.go:141] libmachine: (ha-533645-m02)     </serial>
	I0723 14:13:51.164660   29532 main.go:141] libmachine: (ha-533645-m02)     <console type='pty'>
	I0723 14:13:51.164667   29532 main.go:141] libmachine: (ha-533645-m02)       <target type='serial' port='0'/>
	I0723 14:13:51.164672   29532 main.go:141] libmachine: (ha-533645-m02)     </console>
	I0723 14:13:51.164684   29532 main.go:141] libmachine: (ha-533645-m02)     <rng model='virtio'>
	I0723 14:13:51.164690   29532 main.go:141] libmachine: (ha-533645-m02)       <backend model='random'>/dev/random</backend>
	I0723 14:13:51.164697   29532 main.go:141] libmachine: (ha-533645-m02)     </rng>
	I0723 14:13:51.164702   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164706   29532 main.go:141] libmachine: (ha-533645-m02)     
	I0723 14:13:51.164711   29532 main.go:141] libmachine: (ha-533645-m02)   </devices>
	I0723 14:13:51.164715   29532 main.go:141] libmachine: (ha-533645-m02) </domain>
	I0723 14:13:51.164752   29532 main.go:141] libmachine: (ha-533645-m02) 
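
The <domain> XML above is what the kvm2 driver hands to libvirt before booting the node. A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings and the same qemu:///system URI shown in the config dump (the domain.xml path is a placeholder):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the machine config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml stands in for a <domain type='kvm'> document like the one logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain ("define libvirt domain using xml"),
	// then boot it ("Creating domain...").
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
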
	I0723 14:13:51.171811   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:72:7a:52 in network default
	I0723 14:13:51.172363   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring networks are active...
	I0723 14:13:51.172381   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:51.173135   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring network default is active
	I0723 14:13:51.173472   29532 main.go:141] libmachine: (ha-533645-m02) Ensuring network mk-ha-533645 is active
	I0723 14:13:51.173838   29532 main.go:141] libmachine: (ha-533645-m02) Getting domain xml...
	I0723 14:13:51.174609   29532 main.go:141] libmachine: (ha-533645-m02) Creating domain...
	I0723 14:13:52.396694   29532 main.go:141] libmachine: (ha-533645-m02) Waiting to get IP...
	I0723 14:13:52.397454   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.397864   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.397892   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.397857   29960 retry.go:31] will retry after 291.455513ms: waiting for machine to come up
	I0723 14:13:52.691665   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.692234   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.692259   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.692186   29960 retry.go:31] will retry after 276.688811ms: waiting for machine to come up
	I0723 14:13:52.970744   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:52.971146   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:52.971175   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:52.971098   29960 retry.go:31] will retry after 321.108369ms: waiting for machine to come up
	I0723 14:13:53.294049   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:53.294465   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:53.294496   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:53.294421   29960 retry.go:31] will retry after 579.782128ms: waiting for machine to come up
	I0723 14:13:53.876292   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:53.876738   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:53.876765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:53.876696   29960 retry.go:31] will retry after 533.186824ms: waiting for machine to come up
	I0723 14:13:54.411515   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:54.411942   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:54.411964   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:54.411913   29960 retry.go:31] will retry after 659.951767ms: waiting for machine to come up
	I0723 14:13:55.073839   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:55.074392   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:55.074426   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:55.074328   29960 retry.go:31] will retry after 915.678094ms: waiting for machine to come up
	I0723 14:13:55.991449   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:55.991897   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:55.991926   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:55.991848   29960 retry.go:31] will retry after 1.130153568s: waiting for machine to come up
	I0723 14:13:57.124226   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:57.124793   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:57.124821   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:57.124744   29960 retry.go:31] will retry after 1.350718893s: waiting for machine to come up
	I0723 14:13:58.477352   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:13:58.477782   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:13:58.477805   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:13:58.477740   29960 retry.go:31] will retry after 2.162424933s: waiting for machine to come up
	I0723 14:14:00.642131   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:00.642561   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:00.642587   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:00.642529   29960 retry.go:31] will retry after 1.904873624s: waiting for machine to come up
	I0723 14:14:02.548616   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:02.549141   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:02.549171   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:02.549073   29960 retry.go:31] will retry after 2.896313096s: waiting for machine to come up
	I0723 14:14:05.449196   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:05.449740   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:05.449767   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:05.449703   29960 retry.go:31] will retry after 4.145626381s: waiting for machine to come up
	I0723 14:14:09.599382   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:09.599737   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find current IP address of domain ha-533645-m02 in network mk-ha-533645
	I0723 14:14:09.599760   29532 main.go:141] libmachine: (ha-533645-m02) DBG | I0723 14:14:09.599691   29960 retry.go:31] will retry after 3.465080003s: waiting for machine to come up
	I0723 14:14:13.067839   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.068249   29532 main.go:141] libmachine: (ha-533645-m02) Found IP for machine: 192.168.39.182
	I0723 14:14:13.068274   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has current primary IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.068282   29532 main.go:141] libmachine: (ha-533645-m02) Reserving static IP address...
	I0723 14:14:13.068684   29532 main.go:141] libmachine: (ha-533645-m02) DBG | unable to find host DHCP lease matching {name: "ha-533645-m02", mac: "52:54:00:a0:97:d5", ip: "192.168.39.182"} in network mk-ha-533645
	I0723 14:14:13.138284   29532 main.go:141] libmachine: (ha-533645-m02) Reserved static IP address: 192.168.39.182
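
The "will retry after ..." lines are the driver polling libvirt's DHCP leases for the new MAC until an address shows up, sleeping a bit longer (with jitter) after each miss. A rough sketch of that wait loop; lookupLease is a hypothetical stand-in for querying the mk-ha-533645 network's leases:

package main

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupLease until the MAC appears in the DHCP leases or the
// timeout expires, growing the sleep (plus jitter) after each miss, roughly
// like the retry.go lines above.
func waitForIP(mac string, timeout time.Duration, lookupLease func(string) (string, bool)) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupLease(mac); ok {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
}

func main() {
	// Fake lease source for the sketch: the address appears after a few polls.
	tries := 0
	ip, err := waitForIP("52:54:00:a0:97:d5", 30*time.Second, func(string) (string, bool) {
		tries++
		if tries < 4 {
			return "", false
		}
		return "192.168.39.182", true
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
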
	I0723 14:14:13.138317   29532 main.go:141] libmachine: (ha-533645-m02) Waiting for SSH to be available...
	I0723 14:14:13.138327   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Getting to WaitForSSH function...
	I0723 14:14:13.141165   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.141569   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.141598   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.141774   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using SSH client type: external
	I0723 14:14:13.141800   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa (-rw-------)
	I0723 14:14:13.141828   29532 main.go:141] libmachine: (ha-533645-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:14:13.141841   29532 main.go:141] libmachine: (ha-533645-m02) DBG | About to run SSH command:
	I0723 14:14:13.141855   29532 main.go:141] libmachine: (ha-533645-m02) DBG | exit 0
	I0723 14:14:13.266560   29532 main.go:141] libmachine: (ha-533645-m02) DBG | SSH cmd err, output: <nil>: 
	I0723 14:14:13.266861   29532 main.go:141] libmachine: (ha-533645-m02) KVM machine creation complete!
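
The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs a bare "exit 0"; a zero exit status is what flips the machine to "creation complete". A minimal sketch of that probe with os/exec, reusing the options, key path, and address from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the external ssh invocation logged above: throwaway known_hosts,
	// key-only auth, and a trivial "exit 0" as the liveness command.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa",
		"-p", "22",
		"docker@192.168.39.182",
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		log.Fatalf("sshd not reachable yet: %v", err)
	}
	log.Println("SSH is available")
}
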
	I0723 14:14:13.267104   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:14:13.267679   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:13.267903   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:13.268102   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:14:13.268116   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:14:13.269460   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:14:13.269473   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:14:13.269478   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:14:13.269485   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.271813   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.272192   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.272219   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.272354   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.272509   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.272665   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.272786   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.272980   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.273160   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.273176   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:14:13.381609   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:14:13.381631   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:14:13.381638   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.384393   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.384736   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.384765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.384918   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.385127   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.385361   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.385593   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.385776   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.386028   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.386048   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:14:13.494780   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:14:13.494834   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:14:13.494841   29532 main.go:141] libmachine: Provisioning with buildroot...
	I0723 14:14:13.494849   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.495134   29532 buildroot.go:166] provisioning hostname "ha-533645-m02"
	I0723 14:14:13.495166   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.495365   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.498165   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.498614   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.498642   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.498791   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.498929   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.499081   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.499192   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.499333   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.499478   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.499491   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645-m02 && echo "ha-533645-m02" | sudo tee /etc/hostname
	I0723 14:14:13.620479   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645-m02
	
	I0723 14:14:13.620506   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.623524   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.623854   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.623878   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.624047   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.624242   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.624397   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.624548   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.624723   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:13.624920   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:13.624938   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:14:13.738808   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:14:13.738830   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:14:13.738844   29532 buildroot.go:174] setting up certificates
	I0723 14:14:13.738854   29532 provision.go:84] configureAuth start
	I0723 14:14:13.738862   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetMachineName
	I0723 14:14:13.739159   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:13.741541   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.741917   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.741942   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.742108   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.744426   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.744774   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.744791   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.744960   29532 provision.go:143] copyHostCerts
	I0723 14:14:13.744988   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:14:13.745022   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:14:13.745035   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:14:13.745108   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:14:13.745217   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:14:13.745242   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:14:13.745250   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:14:13.745285   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:14:13.745349   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:14:13.745372   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:14:13.745381   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:14:13.745414   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:14:13.745476   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645-m02 san=[127.0.0.1 192.168.39.182 ha-533645-m02 localhost minikube]
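
configureAuth generates a server certificate for the new node, signed by the minikube CA and carrying the SANs listed above (127.0.0.1, 192.168.39.182, ha-533645-m02, localhost, minikube). A compressed sketch of issuing such a cert with crypto/x509; the helper name and the way the CA pair is obtained are illustrative, not minikube's own code:

package certs

import (
	crand "crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by caCert/caKey with the
// IP and DNS SANs that provision.go lists in the log line above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(crand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-533645-m02"}}, // org= from the log
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.182
		DNSNames:     dnsNames, // e.g. ha-533645-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(crand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
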
	I0723 14:14:13.978917   29532 provision.go:177] copyRemoteCerts
	I0723 14:14:13.978974   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:14:13.978995   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:13.981686   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.982008   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:13.982038   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:13.982268   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:13.982483   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:13.982661   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:13.982822   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.064211   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:14:14.064274   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:14:14.087261   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:14:14.087349   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:14:14.109351   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:14:14.109428   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:14:14.131249   29532 provision.go:87] duration metric: took 392.38503ms to configureAuth
	I0723 14:14:14.131274   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:14:14.131449   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:14:14.131511   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.134184   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.134589   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.134618   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.134772   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.134967   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.135154   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.135294   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.135463   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:14.135654   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:14.135670   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:14:14.396639   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:14:14.396671   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:14:14.396682   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetURL
	I0723 14:14:14.398000   29532 main.go:141] libmachine: (ha-533645-m02) DBG | Using libvirt version 6000000
	I0723 14:14:14.400069   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.400435   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.400461   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.400643   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:14:14.400665   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:14:14.400673   29532 client.go:171] duration metric: took 23.798481003s to LocalClient.Create
	I0723 14:14:14.400693   29532 start.go:167] duration metric: took 23.798536032s to libmachine.API.Create "ha-533645"
	I0723 14:14:14.400703   29532 start.go:293] postStartSetup for "ha-533645-m02" (driver="kvm2")
	I0723 14:14:14.400715   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:14:14.400740   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.400983   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:14:14.401004   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.402975   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.403300   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.403327   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.403514   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.403695   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.403845   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.403980   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.489386   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:14:14.493473   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:14:14.493496   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:14:14.493567   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:14:14.493636   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:14:14.493645   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:14:14.493719   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:14:14.502656   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:14:14.524105   29532 start.go:296] duration metric: took 123.388205ms for postStartSetup
	I0723 14:14:14.524151   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetConfigRaw
	I0723 14:14:14.524729   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:14.527071   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.527484   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.527511   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.527748   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:14:14.527926   29532 start.go:128] duration metric: took 23.943845027s to createHost
	I0723 14:14:14.527948   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.529894   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.530255   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.530281   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.530512   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.530712   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.530871   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.531025   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.531275   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:14:14.531427   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0723 14:14:14.531437   29532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:14:14.639058   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744054.616220575
	
	I0723 14:14:14.639082   29532 fix.go:216] guest clock: 1721744054.616220575
	I0723 14:14:14.639131   29532 fix.go:229] Guest: 2024-07-23 14:14:14.616220575 +0000 UTC Remote: 2024-07-23 14:14:14.527937381 +0000 UTC m=+75.888805407 (delta=88.283194ms)
	I0723 14:14:14.639157   29532 fix.go:200] guest clock delta is within tolerance: 88.283194ms
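
The guest clock is read over SSH and compared against the host clock; only when the delta exceeds a tolerance does minikube resync it, and here the ~88ms delta is accepted. A tiny sketch of that check using the timestamps from the log; the 2s tolerance is an assumption, the log only shows that ~88ms was within it:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the host
// clock that no resync is needed.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Date(2024, 7, 23, 14, 14, 14, 616220575, time.UTC) // guest clock 1721744054.616220575
	host := time.Date(2024, 7, 23, 14, 14, 14, 527937381, time.UTC)  // "Remote" timestamp from fix.go:229
	fmt.Println(clockDeltaOK(guest, host, 2*time.Second))            // true: delta = 88.283194ms
}
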
	I0723 14:14:14.639165   29532 start.go:83] releasing machines lock for "ha-533645-m02", held for 24.055164779s
	I0723 14:14:14.639187   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.639458   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:14.641765   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.642062   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.642089   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.644382   29532 out.go:177] * Found network options:
	I0723 14:14:14.645811   29532 out.go:177]   - NO_PROXY=192.168.39.103
	W0723 14:14:14.646900   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:14:14.646929   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647393   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647568   29532 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:14:14.647655   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:14:14.647703   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	W0723 14:14:14.647801   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:14:14.647872   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:14:14.647893   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:14:14.650400   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650654   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650820   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.650846   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.650991   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:14.651012   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.651018   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:14.651198   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:14:14.651229   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.651375   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.651378   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:14:14.651528   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:14:14.651527   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.651675   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:14:14.880669   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:14:14.886780   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:14:14.886840   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:14:14.901863   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 14:14:14.901889   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:14:14.901942   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:14:14.918281   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:14:14.932298   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:14:14.932370   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:14:14.945919   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:14:14.960255   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:14:15.099840   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:14:15.247026   29532 docker.go:233] disabling docker service ...
	I0723 14:14:15.247105   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:14:15.261726   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:14:15.275008   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:14:15.413571   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:14:15.545731   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:14:15.558812   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:14:15.576442   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:14:15.576511   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.586249   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:14:15.586315   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.595885   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.606494   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.616503   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:14:15.626527   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.636291   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.651849   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:14:15.661721   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:14:15.670999   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:14:15.671064   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:14:15.683748   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:14:15.692463   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:14:15.826299   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
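The block above is the runtime handover on the new machine: Docker and containerd are masked, /etc/crio/crio.conf.d/02-crio.conf is rewritten for the pause image, the cgroupfs driver and the unprivileged-port sysctl, the bridge-netfilter probe falls back to loading br_netfilter, IPv4 forwarding is enabled, and CRI-O is restarted. A minimal Go sketch of that check-then-modprobe fallback, assuming local execution instead of minikube's ssh_runner (ensureBrNetfilter is a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBrNetfilter mirrors the fallback in the log: if the
    // bridge-nf-call-iptables sysctl cannot be read, try loading the
    // br_netfilter kernel module before treating it as a failure.
    func ensureBrNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil // key already present, nothing to do
        }
        if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %w", err)
        }
        return nil
    }

    func main() {
        if err := ensureBrNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
        }
    }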
	I0723 14:14:15.963799   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:14:15.963867   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:14:15.968898   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:14:15.968960   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:14:15.972395   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:14:16.014002   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:14:16.014084   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:14:16.041646   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:14:16.071891   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:14:16.073468   29532 out.go:177]   - env NO_PROXY=192.168.39.103
	I0723 14:14:16.074794   29532 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:14:16.077996   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:16.078474   29532 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:14:04 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:14:16.078503   29532 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:14:16.078713   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:14:16.082668   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:14:16.094157   29532 mustload.go:65] Loading cluster: ha-533645
	I0723 14:14:16.094392   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:14:16.094788   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:16.094827   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:16.109938   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0723 14:14:16.110421   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:16.110893   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:16.110913   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:16.111282   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:16.111518   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:14:16.113111   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:14:16.113400   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:16.113429   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:16.127908   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0723 14:14:16.128363   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:16.128829   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:16.128852   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:16.129140   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:16.129377   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:14:16.129547   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.182
	I0723 14:14:16.129559   29532 certs.go:194] generating shared ca certs ...
	I0723 14:14:16.129571   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.129684   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:14:16.129721   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:14:16.129727   29532 certs.go:256] generating profile certs ...
	I0723 14:14:16.129786   29532 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:14:16.129810   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93
	I0723 14:14:16.129822   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.254]
	I0723 14:14:16.240824   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 ...
	I0723 14:14:16.240856   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93: {Name:mkf9d33d57e4f2ae7e43ba01e73119266f40336d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.241018   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93 ...
	I0723 14:14:16.241030   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93: {Name:mk6277f2ca8f2772f186f6bb140a40234df422b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:14:16.241099   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.607a4d93 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:14:16.241226   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.607a4d93 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
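The apiserver profile certificate is regenerated at this point because its subject alternative names must now cover the service IP, localhost, both control-plane addresses and the kube-vip VIP (192.168.39.254). A rough sketch of such a template with Go's crypto/x509, assuming it would then be signed by the existing minikubeCA key (illustrative only, not the actual certs.go implementation):

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // apiserverTemplate builds an x509 template whose SANs match the IP list
    // in the log: service IP, localhost, control-plane nodes and the VIP.
    // Signing it against the minikubeCA key is left out of this sketch.
    func apiserverTemplate() *x509.Certificate {
        var sans []net.IP
        for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
            "192.168.39.103", "192.168.39.182", "192.168.39.254"} {
            sans = append(sans, net.ParseIP(s))
        }
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses:  sans,
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }

    func main() {
        fmt.Println(apiserverTemplate().IPAddresses)
    }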
	I0723 14:14:16.241346   29532 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:14:16.241361   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:14:16.241373   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:14:16.241385   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:14:16.241395   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:14:16.241407   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:14:16.241420   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:14:16.241432   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:14:16.241444   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:14:16.241488   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:14:16.241519   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:14:16.241528   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:14:16.241549   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:14:16.241569   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:14:16.241590   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:14:16.241631   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:14:16.241656   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.241671   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.241682   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.241712   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:14:16.244564   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:16.245153   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:14:16.245181   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:16.245351   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:14:16.245561   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:14:16.245693   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:14:16.245836   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:14:16.322863   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0723 14:14:16.327760   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0723 14:14:16.338466   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0723 14:14:16.342374   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0723 14:14:16.352176   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0723 14:14:16.356217   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0723 14:14:16.365902   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0723 14:14:16.369997   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0723 14:14:16.380157   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0723 14:14:16.384290   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0723 14:14:16.395742   29532 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0723 14:14:16.400204   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0723 14:14:16.412059   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:14:16.436169   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:14:16.461338   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:14:16.484637   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:14:16.507795   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0723 14:14:16.529936   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:14:16.553134   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:14:16.576152   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:14:16.604070   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:14:16.626957   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:14:16.648213   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:14:16.669192   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0723 14:14:16.683778   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0723 14:14:16.698487   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0723 14:14:16.713322   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0723 14:14:16.728095   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0723 14:14:16.742720   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0723 14:14:16.757481   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0723 14:14:16.773001   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:14:16.778525   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:14:16.788682   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.792836   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.792904   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:14:16.798288   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:14:16.808807   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:14:16.818637   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.822738   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.822776   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:14:16.827819   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:14:16.837444   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:14:16.847316   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.851435   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.851492   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:14:16.856731   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
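The openssl/ln sequence above is how the minikube, profile and test certificates get registered with the node's system trust store: the OpenSSL subject hash becomes the symlink name under /etc/ssl/certs. A small Go sketch of the same pattern, run locally and shelling out to openssl (paths and sudo usage are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a CA certificate and
    // exposes it as /etc/ssl/certs/<hash>.0, the same shape as the
    // "openssl x509 -hash" + "ln -fs" pair in the log.
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }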
	I0723 14:14:16.866308   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:14:16.869873   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:14:16.869929   29532 kubeadm.go:934] updating node {m02 192.168.39.182 8443 v1.30.3 crio true true} ...
	I0723 14:14:16.870021   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:14:16.870049   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:14:16.870084   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:14:16.886068   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:14:16.886136   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
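The rendered manifest above is written a few lines later to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet starts kube-vip as a static pod that holds the 192.168.39.254 VIP and load-balances port 8443 across the control planes. A trivial sketch of that final step, assuming local root access rather than the scp-over-ssh path minikube actually uses:

    package main

    import (
        "fmt"
        "os"
    )

    // writeStaticPod drops a rendered manifest into kubelet's static-pod
    // directory; kubelet then runs kube-vip without involving the API server.
    // The path matches the scp target in the log, but this sketch assumes
    // local root instead of minikube's ssh_runner.
    func writeStaticPod(manifest []byte) error {
        return os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o644)
    }

    func main() {
        if err := writeStaticPod([]byte("# rendered kube-vip pod spec\n")); err != nil {
            fmt.Println(err)
        }
    }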
	I0723 14:14:16.886200   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:14:16.895349   29532 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0723 14:14:16.895407   29532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0723 14:14:16.906476   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0723 14:14:16.906508   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:14:16.906544   29532 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0723 14:14:16.906599   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:14:16.906643   29532 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0723 14:14:16.910632   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0723 14:14:16.910674   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0723 14:14:29.349175   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:14:29.349257   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:14:29.354160   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0723 14:14:29.354194   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0723 14:14:42.245752   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:14:42.260722   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:14:42.260826   29532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:14:42.264854   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0723 14:14:42.264884   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
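Each v1.30.3 binary is probed on the target with stat and copied from the local cache only when missing, which is why kubectl, kubeadm and kubelet each show a failed stat followed by an scp. A rough check-then-copy sketch over plain ssh/scp; the host alias, permission handling and lack of sudo are all simplifications of what ssh_runner actually does:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // syncBinary copies a cached Kubernetes binary to the node only when the
    // remote stat probe fails, mirroring the existence checks in the log.
    // "ha-533645-m02" is used here as a plain ssh alias and root access on
    // the target is assumed.
    func syncBinary(localCache, name string) error {
        remote := "/var/lib/minikube/binaries/v1.30.3/" + name
        if exec.Command("ssh", "ha-533645-m02", "stat", "-c", "%s %y", remote).Run() == nil {
            return nil // already present, skip the transfer
        }
        return exec.Command("scp", filepath.Join(localCache, name), "ha-533645-m02:"+remote).Run()
    }

    func main() {
        cache := "/home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3"
        for _, b := range []string{"kubectl", "kubeadm", "kubelet"} {
            if err := syncBinary(cache, b); err != nil {
                fmt.Println(b, err)
            }
        }
    }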
	I0723 14:14:42.628617   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0723 14:14:42.637383   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 14:14:42.653188   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:14:42.668190   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:14:42.682869   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:14:42.686308   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:14:42.697199   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:14:42.805737   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:14:42.821471   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:14:42.821937   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:14:42.821976   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:14:42.836985   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0723 14:14:42.837466   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:14:42.837978   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:14:42.838003   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:14:42.838280   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:14:42.838489   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:14:42.838659   29532 start.go:317] joinCluster: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:14:42.838750   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0723 14:14:42.838765   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:14:42.841670   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:42.842072   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:14:42.842085   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:14:42.842312   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:14:42.842521   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:14:42.842702   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:14:42.842885   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:14:43.002055   29532 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:14:43.002102   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ycf2e.biroaztat8xgm11s --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m02 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443"
	I0723 14:15:05.497769   29532 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7ycf2e.biroaztat8xgm11s --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m02 --control-plane --apiserver-advertise-address=192.168.39.182 --apiserver-bind-port=8443": (22.495634398s)
	I0723 14:15:05.497814   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0723 14:15:06.028807   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645-m02 minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=false
	I0723 14:15:06.176625   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-533645-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0723 14:15:06.285185   29532 start.go:319] duration metric: took 23.446521558s to joinCluster
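Joining m02 is a two-step exchange: "kubeadm token create --print-join-command --ttl=0" runs on the existing control plane, and the printed command is then replayed on the new node with the CRI socket, node name, --control-plane and advertise-address flags appended. A minimal sketch of how that final command line could be assembled (the token and CA hash are placeholders, not live credentials):

    package main

    import "fmt"

    // joinCommand rebuilds the control-plane join invocation from the output
    // of "kubeadm token create --print-join-command", appending the flags
    // minikube adds for a CRI-O node.
    func joinCommand(base, nodeName, advertiseIP string) string {
        return fmt.Sprintf("sudo %s --ignore-preflight-errors=all "+
            "--cri-socket unix:///var/run/crio/crio.sock --node-name=%s "+
            "--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
            base, nodeName, advertiseIP)
    }

    func main() {
        base := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> " +
            "--discovery-token-ca-cert-hash sha256:<hash>"
        fmt.Println(joinCommand(base, "ha-533645-m02", "192.168.39.182"))
    }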
	I0723 14:15:06.285272   29532 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:06.285577   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:06.286986   29532 out.go:177] * Verifying Kubernetes components...
	I0723 14:15:06.288823   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:06.512358   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:15:06.552647   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:15:06.552860   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0723 14:15:06.552913   29532 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.103:8443
	I0723 14:15:06.553076   29532 node_ready.go:35] waiting up to 6m0s for node "ha-533645-m02" to be "Ready" ...
	I0723 14:15:06.553154   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:06.553163   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:06.553170   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:06.553175   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:06.564244   29532 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0723 14:15:07.053315   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:07.053336   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:07.053344   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:07.053346   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:07.062284   29532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0723 14:15:07.553707   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:07.553727   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:07.553736   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:07.553740   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:07.556934   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.053376   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:08.053396   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:08.053404   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:08.053409   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:08.056563   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.553550   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:08.553574   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:08.553581   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:08.553588   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:08.556881   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:08.557500   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:09.053608   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:09.053628   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:09.053636   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:09.053640   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:09.056661   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:09.553509   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:09.553530   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:09.553538   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:09.553542   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:09.556768   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:10.053832   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:10.053853   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:10.053860   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:10.053863   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:10.057580   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:10.553837   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:10.553857   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:10.553865   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:10.553869   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:10.558249   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:10.559156   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:11.054143   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:11.054163   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:11.054175   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:11.054181   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:11.068256   29532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0723 14:15:11.554175   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:11.554194   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:11.554201   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:11.554204   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:11.557457   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:12.053540   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:12.053565   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:12.053572   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:12.053576   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:12.056752   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:12.553600   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:12.553621   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:12.553630   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:12.553635   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:12.557709   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:13.053642   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:13.053662   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:13.053673   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:13.053680   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:13.057427   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:13.058173   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:13.553601   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:13.553626   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:13.553637   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:13.553643   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:13.556870   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:14.053505   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:14.053528   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:14.053534   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:14.053538   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:14.057100   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:14.554147   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:14.554169   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:14.554177   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:14.554182   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:14.557559   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.053439   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:15.053461   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:15.053469   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:15.053476   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:15.057015   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.553398   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:15.553419   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:15.553426   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:15.553429   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:15.556910   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:15.557403   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:16.053333   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:16.053355   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:16.053364   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:16.053369   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:16.056997   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:16.554116   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:16.554157   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:16.554168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:16.554173   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:16.557450   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:17.053458   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:17.053480   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:17.053488   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:17.053491   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:17.058211   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:15:17.553611   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:17.553633   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:17.553640   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:17.553643   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:17.557491   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:17.558562   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:18.053352   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:18.053373   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:18.053381   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:18.053386   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:18.057023   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:18.553374   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:18.553394   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:18.553402   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:18.553405   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:18.556503   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:19.053385   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:19.053412   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:19.053423   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:19.053429   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:19.056635   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:19.553607   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:19.553642   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:19.553656   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:19.553661   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:19.557131   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:20.054066   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:20.054088   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:20.054096   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:20.054102   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:20.057285   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:20.057829   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:20.554189   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:20.554210   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:20.554218   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:20.554223   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:20.557832   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:21.053561   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:21.053586   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:21.053594   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:21.053599   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:21.056689   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:21.553511   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:21.553532   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:21.553540   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:21.553544   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:21.556479   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.053389   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.053411   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.053419   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.053424   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.057335   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.058158   29532 node_ready.go:53] node "ha-533645-m02" has status "Ready":"False"
	I0723 14:15:22.553487   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.553510   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.553519   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.553525   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.556650   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.557345   29532 node_ready.go:49] node "ha-533645-m02" has status "Ready":"True"
	I0723 14:15:22.557361   29532 node_ready.go:38] duration metric: took 16.004270893s for node "ha-533645-m02" to be "Ready" ...
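The node_ready wait above is a plain polling loop: GET the node object roughly every 500ms until its Ready condition reports True, which took about 16s here. A rough client-go equivalent, assuming a kubeconfig at the default location rather than minikube's embedded REST config:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // the same shape of loop node_ready.go runs above with a 500ms interval.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready after %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-533645-m02", 6*time.Minute))
    }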
	I0723 14:15:22.557369   29532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:15:22.557440   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:22.557448   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.557455   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.557460   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.563161   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:22.571822   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.571900   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nrvbf
	I0723 14:15:22.571909   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.571917   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.571923   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.574992   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.575535   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.575552   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.575562   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.575567   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.578041   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.578615   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.578632   29532 pod_ready.go:81] duration metric: took 6.781836ms for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.578640   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.578695   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s6xzz
	I0723 14:15:22.578703   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.578710   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.578716   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.581333   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.581849   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.581863   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.581870   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.581874   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.584576   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.585055   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.585076   29532 pod_ready.go:81] duration metric: took 6.428477ms for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.585088   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.585142   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645
	I0723 14:15:22.585153   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.585162   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.585172   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.587839   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.588446   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.588462   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.588472   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.588477   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.590757   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.591153   29532 pod_ready.go:92] pod "etcd-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.591168   29532 pod_ready.go:81] duration metric: took 6.073744ms for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.591175   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.591218   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m02
	I0723 14:15:22.591225   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.591231   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.591235   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.594527   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.595556   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:22.595580   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.595587   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.595590   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.598114   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:22.598946   29532 pod_ready.go:92] pod "etcd-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.598963   29532 pod_ready.go:81] duration metric: took 7.781381ms for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.598975   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.754360   29532 request.go:629] Waited for 155.3269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:15:22.754437   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:15:22.754446   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.754465   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.754489   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.757940   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.954303   29532 request.go:629] Waited for 195.464577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.954362   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:22.954370   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:22.954388   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:22.954393   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:22.957713   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:22.958542   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:22.958560   29532 pod_ready.go:81] duration metric: took 359.578068ms for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:22.958576   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.153796   29532 request.go:629] Waited for 195.144154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:15:23.153868   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:15:23.153876   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.153886   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.153892   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.156933   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.354034   29532 request.go:629] Waited for 196.349856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:23.354081   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:23.354086   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.354093   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.354096   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.357388   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.358259   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:23.358279   29532 pod_ready.go:81] duration metric: took 399.695547ms for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.358288   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.554434   29532 request.go:629] Waited for 196.043801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:15:23.554498   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:15:23.554506   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.554517   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.554525   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.558177   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.754141   29532 request.go:629] Waited for 195.143969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:23.754192   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:23.754197   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.754205   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.754209   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.757663   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:23.758349   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:23.758365   29532 pod_ready.go:81] duration metric: took 400.070197ms for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.758388   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:23.953895   29532 request.go:629] Waited for 195.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:15:23.953947   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:15:23.953952   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:23.953959   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:23.953965   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:23.957273   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.154252   29532 request.go:629] Waited for 196.38583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.154326   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.154335   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.154345   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.154351   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.157728   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.158235   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.158251   29532 pod_ready.go:81] duration metric: took 399.855851ms for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.158261   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.354422   29532 request.go:629] Waited for 196.077704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:15:24.354478   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:15:24.354483   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.354490   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.354494   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.357783   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.553987   29532 request.go:629] Waited for 195.349961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:24.554065   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:24.554073   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.554082   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.554087   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.557585   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.558375   29532 pod_ready.go:92] pod "kube-proxy-9wh4w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.558406   29532 pod_ready.go:81] duration metric: took 400.138962ms for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.558415   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.754546   29532 request.go:629] Waited for 196.071606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:15:24.754624   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:15:24.754631   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.754641   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.754648   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.758475   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.954351   29532 request.go:629] Waited for 195.353695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.954440   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:24.954471   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:24.954483   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:24.954488   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:24.957901   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:24.958672   29532 pod_ready.go:92] pod "kube-proxy-p25cg" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:24.958692   29532 pod_ready.go:81] duration metric: took 400.271263ms for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:24.958701   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.153810   29532 request.go:629] Waited for 195.044638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:15:25.153904   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:15:25.153915   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.153926   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.153936   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.157074   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.353933   29532 request.go:629] Waited for 196.378685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:25.354009   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:15:25.354016   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.354024   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.354031   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.356883   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:15:25.357372   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:25.357394   29532 pod_ready.go:81] duration metric: took 398.68599ms for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.357408   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.554489   29532 request.go:629] Waited for 197.006685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:15:25.554577   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:15:25.554587   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.554598   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.554604   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.558571   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.753964   29532 request.go:629] Waited for 194.510316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:25.754021   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:15:25.754026   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.754034   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.754038   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.757353   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:25.757786   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:15:25.757804   29532 pod_ready.go:81] duration metric: took 400.387585ms for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:15:25.757819   29532 pod_ready.go:38] duration metric: took 3.200422142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:15:25.757843   29532 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:15:25.757902   29532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:15:25.772757   29532 api_server.go:72] duration metric: took 19.487449649s to wait for apiserver process to appear ...
	I0723 14:15:25.772781   29532 api_server.go:88] waiting for apiserver healthz status ...
	I0723 14:15:25.772797   29532 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0723 14:15:25.776956   29532 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0723 14:15:25.777014   29532 round_trippers.go:463] GET https://192.168.39.103:8443/version
	I0723 14:15:25.777020   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.777034   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.777043   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.777964   29532 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0723 14:15:25.778048   29532 api_server.go:141] control plane version: v1.30.3
	I0723 14:15:25.778062   29532 api_server.go:131] duration metric: took 5.275939ms to wait for apiserver health ...
	I0723 14:15:25.778068   29532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:15:25.954458   29532 request.go:629] Waited for 176.335463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:25.954525   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:25.954531   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:25.954539   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:25.954543   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:25.959586   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:25.963823   29532 system_pods.go:59] 17 kube-system pods found
	I0723 14:15:25.963848   29532 system_pods.go:61] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:15:25.963852   29532 system_pods.go:61] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:15:25.963856   29532 system_pods.go:61] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:15:25.963860   29532 system_pods.go:61] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:15:25.963864   29532 system_pods.go:61] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:15:25.963868   29532 system_pods.go:61] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:15:25.963871   29532 system_pods.go:61] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:15:25.963875   29532 system_pods.go:61] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:15:25.963878   29532 system_pods.go:61] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:15:25.963882   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:15:25.963887   29532 system_pods.go:61] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:15:25.963890   29532 system_pods.go:61] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:15:25.963896   29532 system_pods.go:61] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:15:25.963900   29532 system_pods.go:61] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:15:25.963905   29532 system_pods.go:61] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:15:25.963908   29532 system_pods.go:61] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:15:25.963913   29532 system_pods.go:61] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:15:25.963919   29532 system_pods.go:74] duration metric: took 185.845925ms to wait for pod list to return data ...
	I0723 14:15:25.963928   29532 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:15:26.153552   29532 request.go:629] Waited for 189.561602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:15:26.153613   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:15:26.153619   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.153628   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.153638   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.157078   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:26.157313   29532 default_sa.go:45] found service account: "default"
	I0723 14:15:26.157331   29532 default_sa.go:55] duration metric: took 193.397665ms for default service account to be created ...
	I0723 14:15:26.157339   29532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:15:26.353699   29532 request.go:629] Waited for 196.295451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:26.353751   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:15:26.353756   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.353763   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.353766   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.358912   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:15:26.364015   29532 system_pods.go:86] 17 kube-system pods found
	I0723 14:15:26.364040   29532 system_pods.go:89] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:15:26.364047   29532 system_pods.go:89] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:15:26.364053   29532 system_pods.go:89] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:15:26.364059   29532 system_pods.go:89] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:15:26.364064   29532 system_pods.go:89] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:15:26.364069   29532 system_pods.go:89] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:15:26.364075   29532 system_pods.go:89] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:15:26.364081   29532 system_pods.go:89] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:15:26.364090   29532 system_pods.go:89] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:15:26.364100   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:15:26.364106   29532 system_pods.go:89] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:15:26.364112   29532 system_pods.go:89] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:15:26.364120   29532 system_pods.go:89] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:15:26.364128   29532 system_pods.go:89] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:15:26.364136   29532 system_pods.go:89] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:15:26.364142   29532 system_pods.go:89] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:15:26.364148   29532 system_pods.go:89] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:15:26.364159   29532 system_pods.go:126] duration metric: took 206.814001ms to wait for k8s-apps to be running ...
	I0723 14:15:26.364171   29532 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:15:26.364220   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:15:26.378922   29532 system_svc.go:56] duration metric: took 14.740952ms WaitForService to wait for kubelet
	I0723 14:15:26.378954   29532 kubeadm.go:582] duration metric: took 20.093650935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:15:26.378973   29532 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:15:26.554375   29532 request.go:629] Waited for 175.329684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes
	I0723 14:15:26.554473   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes
	I0723 14:15:26.554481   29532 round_trippers.go:469] Request Headers:
	I0723 14:15:26.554490   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:15:26.554496   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:15:26.558473   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:15:26.559158   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:15:26.559182   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:15:26.559197   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:15:26.559202   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:15:26.559207   29532 node_conditions.go:105] duration metric: took 180.230463ms to run NodePressure ...
	I0723 14:15:26.559220   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:15:26.559249   29532 start.go:255] writing updated cluster config ...
	I0723 14:15:26.561275   29532 out.go:177] 
	I0723 14:15:26.562673   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:26.562784   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:26.564481   29532 out.go:177] * Starting "ha-533645-m03" control-plane node in "ha-533645" cluster
	I0723 14:15:26.565768   29532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:15:26.565799   29532 cache.go:56] Caching tarball of preloaded images
	I0723 14:15:26.565893   29532 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:15:26.565904   29532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:15:26.565986   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:26.566151   29532 start.go:360] acquireMachinesLock for ha-533645-m03: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:15:26.566192   29532 start.go:364] duration metric: took 22.445µs to acquireMachinesLock for "ha-533645-m03"
	I0723 14:15:26.566206   29532 start.go:93] Provisioning new machine with config: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:26.566323   29532 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0723 14:15:26.567992   29532 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 14:15:26.568078   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:26.568111   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:26.583205   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0723 14:15:26.583743   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:26.584212   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:26.584230   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:26.584540   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:26.584713   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:26.584827   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:26.584930   29532 start.go:159] libmachine.API.Create for "ha-533645" (driver="kvm2")
	I0723 14:15:26.584955   29532 client.go:168] LocalClient.Create starting
	I0723 14:15:26.584983   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 14:15:26.585019   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:15:26.585033   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:15:26.585078   29532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 14:15:26.585094   29532 main.go:141] libmachine: Decoding PEM data...
	I0723 14:15:26.585102   29532 main.go:141] libmachine: Parsing certificate...
	I0723 14:15:26.585118   29532 main.go:141] libmachine: Running pre-create checks...
	I0723 14:15:26.585126   29532 main.go:141] libmachine: (ha-533645-m03) Calling .PreCreateCheck
	I0723 14:15:26.585334   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:26.585749   29532 main.go:141] libmachine: Creating machine...
	I0723 14:15:26.585763   29532 main.go:141] libmachine: (ha-533645-m03) Calling .Create
	I0723 14:15:26.585874   29532 main.go:141] libmachine: (ha-533645-m03) Creating KVM machine...
	I0723 14:15:26.587216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found existing default KVM network
	I0723 14:15:26.587421   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found existing private KVM network mk-ha-533645
	I0723 14:15:26.587535   29532 main.go:141] libmachine: (ha-533645-m03) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 ...
	I0723 14:15:26.587558   29532 main.go:141] libmachine: (ha-533645-m03) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 14:15:26.587657   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:26.587547   30443 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:15:26.587721   29532 main.go:141] libmachine: (ha-533645-m03) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 14:15:26.820566   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:26.820456   30443 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa...
	I0723 14:15:27.015161   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:27.015020   30443 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/ha-533645-m03.rawdisk...
	I0723 14:15:27.015198   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Writing magic tar header
	I0723 14:15:27.015216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Writing SSH key tar header
	I0723 14:15:27.015234   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:27.015138   30443 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 ...
	I0723 14:15:27.015252   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03
	I0723 14:15:27.015319   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03 (perms=drwx------)
	I0723 14:15:27.015344   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 14:15:27.015355   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 14:15:27.015373   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 14:15:27.015385   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 14:15:27.015399   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 14:15:27.015412   29532 main.go:141] libmachine: (ha-533645-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 14:15:27.015425   29532 main.go:141] libmachine: (ha-533645-m03) Creating domain...
	I0723 14:15:27.015439   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:15:27.015451   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 14:15:27.015463   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 14:15:27.015473   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home/jenkins
	I0723 14:15:27.015510   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Checking permissions on dir: /home
	I0723 14:15:27.015536   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Skipping /home - not owner
	I0723 14:15:27.016417   29532 main.go:141] libmachine: (ha-533645-m03) define libvirt domain using xml: 
	I0723 14:15:27.016438   29532 main.go:141] libmachine: (ha-533645-m03) <domain type='kvm'>
	I0723 14:15:27.016446   29532 main.go:141] libmachine: (ha-533645-m03)   <name>ha-533645-m03</name>
	I0723 14:15:27.016455   29532 main.go:141] libmachine: (ha-533645-m03)   <memory unit='MiB'>2200</memory>
	I0723 14:15:27.016462   29532 main.go:141] libmachine: (ha-533645-m03)   <vcpu>2</vcpu>
	I0723 14:15:27.016470   29532 main.go:141] libmachine: (ha-533645-m03)   <features>
	I0723 14:15:27.016482   29532 main.go:141] libmachine: (ha-533645-m03)     <acpi/>
	I0723 14:15:27.016489   29532 main.go:141] libmachine: (ha-533645-m03)     <apic/>
	I0723 14:15:27.016498   29532 main.go:141] libmachine: (ha-533645-m03)     <pae/>
	I0723 14:15:27.016504   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.016517   29532 main.go:141] libmachine: (ha-533645-m03)   </features>
	I0723 14:15:27.016527   29532 main.go:141] libmachine: (ha-533645-m03)   <cpu mode='host-passthrough'>
	I0723 14:15:27.016552   29532 main.go:141] libmachine: (ha-533645-m03)   
	I0723 14:15:27.016573   29532 main.go:141] libmachine: (ha-533645-m03)   </cpu>
	I0723 14:15:27.016585   29532 main.go:141] libmachine: (ha-533645-m03)   <os>
	I0723 14:15:27.016596   29532 main.go:141] libmachine: (ha-533645-m03)     <type>hvm</type>
	I0723 14:15:27.016609   29532 main.go:141] libmachine: (ha-533645-m03)     <boot dev='cdrom'/>
	I0723 14:15:27.016620   29532 main.go:141] libmachine: (ha-533645-m03)     <boot dev='hd'/>
	I0723 14:15:27.016634   29532 main.go:141] libmachine: (ha-533645-m03)     <bootmenu enable='no'/>
	I0723 14:15:27.016648   29532 main.go:141] libmachine: (ha-533645-m03)   </os>
	I0723 14:15:27.016658   29532 main.go:141] libmachine: (ha-533645-m03)   <devices>
	I0723 14:15:27.016668   29532 main.go:141] libmachine: (ha-533645-m03)     <disk type='file' device='cdrom'>
	I0723 14:15:27.016685   29532 main.go:141] libmachine: (ha-533645-m03)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/boot2docker.iso'/>
	I0723 14:15:27.016697   29532 main.go:141] libmachine: (ha-533645-m03)       <target dev='hdc' bus='scsi'/>
	I0723 14:15:27.016709   29532 main.go:141] libmachine: (ha-533645-m03)       <readonly/>
	I0723 14:15:27.016723   29532 main.go:141] libmachine: (ha-533645-m03)     </disk>
	I0723 14:15:27.016739   29532 main.go:141] libmachine: (ha-533645-m03)     <disk type='file' device='disk'>
	I0723 14:15:27.016751   29532 main.go:141] libmachine: (ha-533645-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 14:15:27.016765   29532 main.go:141] libmachine: (ha-533645-m03)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/ha-533645-m03.rawdisk'/>
	I0723 14:15:27.016776   29532 main.go:141] libmachine: (ha-533645-m03)       <target dev='hda' bus='virtio'/>
	I0723 14:15:27.016788   29532 main.go:141] libmachine: (ha-533645-m03)     </disk>
	I0723 14:15:27.016803   29532 main.go:141] libmachine: (ha-533645-m03)     <interface type='network'>
	I0723 14:15:27.016816   29532 main.go:141] libmachine: (ha-533645-m03)       <source network='mk-ha-533645'/>
	I0723 14:15:27.016831   29532 main.go:141] libmachine: (ha-533645-m03)       <model type='virtio'/>
	I0723 14:15:27.016842   29532 main.go:141] libmachine: (ha-533645-m03)     </interface>
	I0723 14:15:27.016850   29532 main.go:141] libmachine: (ha-533645-m03)     <interface type='network'>
	I0723 14:15:27.016863   29532 main.go:141] libmachine: (ha-533645-m03)       <source network='default'/>
	I0723 14:15:27.016878   29532 main.go:141] libmachine: (ha-533645-m03)       <model type='virtio'/>
	I0723 14:15:27.016890   29532 main.go:141] libmachine: (ha-533645-m03)     </interface>
	I0723 14:15:27.016901   29532 main.go:141] libmachine: (ha-533645-m03)     <serial type='pty'>
	I0723 14:15:27.016913   29532 main.go:141] libmachine: (ha-533645-m03)       <target port='0'/>
	I0723 14:15:27.016923   29532 main.go:141] libmachine: (ha-533645-m03)     </serial>
	I0723 14:15:27.016934   29532 main.go:141] libmachine: (ha-533645-m03)     <console type='pty'>
	I0723 14:15:27.016942   29532 main.go:141] libmachine: (ha-533645-m03)       <target type='serial' port='0'/>
	I0723 14:15:27.016953   29532 main.go:141] libmachine: (ha-533645-m03)     </console>
	I0723 14:15:27.016965   29532 main.go:141] libmachine: (ha-533645-m03)     <rng model='virtio'>
	I0723 14:15:27.016977   29532 main.go:141] libmachine: (ha-533645-m03)       <backend model='random'>/dev/random</backend>
	I0723 14:15:27.016989   29532 main.go:141] libmachine: (ha-533645-m03)     </rng>
	I0723 14:15:27.016999   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.017028   29532 main.go:141] libmachine: (ha-533645-m03)     
	I0723 14:15:27.017053   29532 main.go:141] libmachine: (ha-533645-m03)   </devices>
	I0723 14:15:27.017070   29532 main.go:141] libmachine: (ha-533645-m03) </domain>
	I0723 14:15:27.017074   29532 main.go:141] libmachine: (ha-533645-m03) 
	I0723 14:15:27.023268   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:bb:e8:b3 in network default
	I0723 14:15:27.023910   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring networks are active...
	I0723 14:15:27.023941   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:27.024595   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring network default is active
	I0723 14:15:27.024936   29532 main.go:141] libmachine: (ha-533645-m03) Ensuring network mk-ha-533645 is active
	I0723 14:15:27.025445   29532 main.go:141] libmachine: (ha-533645-m03) Getting domain xml...
	I0723 14:15:27.026306   29532 main.go:141] libmachine: (ha-533645-m03) Creating domain...
	I0723 14:15:28.248436   29532 main.go:141] libmachine: (ha-533645-m03) Waiting to get IP...
	I0723 14:15:28.249334   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.249733   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.249769   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.249722   30443 retry.go:31] will retry after 281.606831ms: waiting for machine to come up
	I0723 14:15:28.533482   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.534008   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.534030   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.533963   30443 retry.go:31] will retry after 385.152438ms: waiting for machine to come up
	I0723 14:15:28.920341   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:28.920872   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:28.920948   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:28.920792   30443 retry.go:31] will retry after 314.271869ms: waiting for machine to come up
	I0723 14:15:29.237053   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:29.237520   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:29.237550   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:29.237465   30443 retry.go:31] will retry after 471.988519ms: waiting for machine to come up
	I0723 14:15:29.711227   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:29.711743   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:29.711772   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:29.711695   30443 retry.go:31] will retry after 531.270874ms: waiting for machine to come up
	I0723 14:15:30.244371   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:30.244942   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:30.244970   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:30.244888   30443 retry.go:31] will retry after 770.53841ms: waiting for machine to come up
	I0723 14:15:31.016673   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:31.017006   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:31.017031   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:31.016973   30443 retry.go:31] will retry after 1.095715583s: waiting for machine to come up
	I0723 14:15:32.114498   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:32.115005   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:32.115035   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:32.114945   30443 retry.go:31] will retry after 1.280623697s: waiting for machine to come up
	I0723 14:15:33.397394   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:33.397826   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:33.397854   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:33.397779   30443 retry.go:31] will retry after 1.57925116s: waiting for machine to come up
	I0723 14:15:34.979429   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:34.979891   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:34.979929   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:34.979857   30443 retry.go:31] will retry after 1.686989757s: waiting for machine to come up
	I0723 14:15:36.668556   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:36.669180   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:36.669210   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:36.669127   30443 retry.go:31] will retry after 1.847102849s: waiting for machine to come up
	I0723 14:15:38.519171   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:38.519617   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:38.519670   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:38.519596   30443 retry.go:31] will retry after 2.787631648s: waiting for machine to come up
	I0723 14:15:41.308418   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:41.308777   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:41.308806   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:41.308742   30443 retry.go:31] will retry after 4.132953626s: waiting for machine to come up
	I0723 14:15:45.444189   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:45.444716   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find current IP address of domain ha-533645-m03 in network mk-ha-533645
	I0723 14:15:45.444747   29532 main.go:141] libmachine: (ha-533645-m03) DBG | I0723 14:15:45.444649   30443 retry.go:31] will retry after 4.976181345s: waiting for machine to come up
	I0723 14:15:50.425349   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.425885   29532 main.go:141] libmachine: (ha-533645-m03) Found IP for machine: 192.168.39.127
	I0723 14:15:50.425911   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.425919   29532 main.go:141] libmachine: (ha-533645-m03) Reserving static IP address...
	I0723 14:15:50.426246   29532 main.go:141] libmachine: (ha-533645-m03) DBG | unable to find host DHCP lease matching {name: "ha-533645-m03", mac: "52:54:00:76:92:af", ip: "192.168.39.127"} in network mk-ha-533645
	I0723 14:15:50.499815   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Getting to WaitForSSH function...
	I0723 14:15:50.499850   29532 main.go:141] libmachine: (ha-533645-m03) Reserved static IP address: 192.168.39.127
	I0723 14:15:50.499868   29532 main.go:141] libmachine: (ha-533645-m03) Waiting for SSH to be available...
	I0723 14:15:50.502999   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.503508   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.503536   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.503700   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using SSH client type: external
	I0723 14:15:50.503728   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa (-rw-------)
	I0723 14:15:50.503754   29532 main.go:141] libmachine: (ha-533645-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 14:15:50.503768   29532 main.go:141] libmachine: (ha-533645-m03) DBG | About to run SSH command:
	I0723 14:15:50.503779   29532 main.go:141] libmachine: (ha-533645-m03) DBG | exit 0
	I0723 14:15:50.626137   29532 main.go:141] libmachine: (ha-533645-m03) DBG | SSH cmd err, output: <nil>: 
	I0723 14:15:50.626421   29532 main.go:141] libmachine: (ha-533645-m03) KVM machine creation complete!
	I0723 14:15:50.626763   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:50.627266   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:50.627475   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:50.627653   29532 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 14:15:50.627674   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:15:50.629326   29532 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 14:15:50.629345   29532 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 14:15:50.629354   29532 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 14:15:50.629363   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.632139   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.632548   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.632574   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.632713   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.632887   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.633106   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.633257   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.633417   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.633656   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.633671   29532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 14:15:50.733471   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:15:50.733491   29532 main.go:141] libmachine: Detecting the provisioner...
	I0723 14:15:50.733499   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.736505   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.736855   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.736883   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.737066   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.737269   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.737489   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.737656   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.737816   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.737991   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.738005   29532 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 14:15:50.838923   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 14:15:50.838988   29532 main.go:141] libmachine: found compatible host: buildroot
	I0723 14:15:50.838995   29532 main.go:141] libmachine: Provisioning with buildroot...
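The provisioner is picked by running "cat /etc/os-release" over SSH and matching the ID field; ID=buildroot above is what makes libmachine report "found compatible host: buildroot". A minimal sketch of that kind of key=value parsing, under the assumption that a plain map of fields is enough; it is not minikube's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
    // stripping optional surrounding quotes from values.
    func parseOSRelease(content string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields
    }

    func main() {
        // Output captured from the SSH command in the log above.
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(out)
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
        }
    }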
	I0723 14:15:50.839002   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:50.839223   29532 buildroot.go:166] provisioning hostname "ha-533645-m03"
	I0723 14:15:50.839244   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:50.839440   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.841695   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.842032   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.842048   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.842232   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.842428   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.842574   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.842678   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.842863   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.843040   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.843056   29532 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645-m03 && echo "ha-533645-m03" | sudo tee /etc/hostname
	I0723 14:15:50.965435   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645-m03
	
	I0723 14:15:50.965460   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:50.968290   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.968712   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:50.968739   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:50.968981   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:50.969180   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.969364   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:50.969521   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:50.969692   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:50.969870   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:50.969891   29532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:15:51.079197   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
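After setting the hostname, the provisioner makes sure /etc/hosts resolves it locally: the shell snippet above reuses an existing 127.0.1.1 line if one is present and appends one otherwise. A rough Go equivalent of that edit, operating on the file content in memory; the helper name is illustrative and the hostname match is an approximation of the grep pattern:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureLoopbackAlias mirrors the shell logic from the log: if /etc/hosts has
    // no entry ending in the hostname, rewrite an existing 127.0.1.1 line or append one.
    func ensureLoopbackAlias(hosts, hostname string) string {
        lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == hostname {
                return hosts // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return strings.Join(lines, "\n") + "\n"
            }
        }
        return strings.Join(append(lines, "127.0.1.1 "+hostname), "\n") + "\n"
    }

    func main() {
        before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureLoopbackAlias(before, "ha-533645-m03"))
    }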
	I0723 14:15:51.079221   29532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:15:51.079239   29532 buildroot.go:174] setting up certificates
	I0723 14:15:51.079249   29532 provision.go:84] configureAuth start
	I0723 14:15:51.079261   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetMachineName
	I0723 14:15:51.079532   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:51.082328   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.082845   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.082877   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.083066   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.085073   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.085410   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.085443   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.085609   29532 provision.go:143] copyHostCerts
	I0723 14:15:51.085644   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:15:51.085680   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:15:51.085692   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:15:51.085774   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:15:51.085866   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:15:51.085892   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:15:51.085902   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:15:51.085938   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:15:51.086005   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:15:51.086028   29532 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:15:51.086036   29532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:15:51.086068   29532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:15:51.086136   29532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645-m03 san=[127.0.0.1 192.168.39.127 ha-533645-m03 localhost minikube]
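configureAuth issues a per-machine server certificate signed by the shared minikube CA, with SANs covering the loopback address, the VM IP, the machine name, localhost and minikube (the san=[...] list above). A self-contained sketch of issuing such a certificate with crypto/x509; unlike the real flow it creates the CA in memory instead of loading ca.pem/ca-key.pem, and error handling is trimmed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // In-memory CA standing in for ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate for the new machine, with IP and DNS SANs as in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-533645-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.127")},
            DNSNames:     []string{"ha-533645-m03", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
    }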
	I0723 14:15:51.830193   29532 provision.go:177] copyRemoteCerts
	I0723 14:15:51.830248   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:15:51.830269   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.833287   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.833680   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.833708   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.833869   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:51.834069   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.834226   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:51.834352   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:51.917082   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:15:51.917158   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:15:51.943452   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:15:51.943522   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:15:51.966039   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:15:51.966108   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:15:51.988152   29532 provision.go:87] duration metric: took 908.889393ms to configureAuth
	I0723 14:15:51.988176   29532 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:15:51.988386   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:51.988464   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:51.991263   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.991654   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:51.991674   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:51.991863   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:51.992078   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.992242   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:51.992368   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:51.992526   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:51.992680   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:51.992695   29532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:15:52.259466   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:15:52.259515   29532 main.go:141] libmachine: Checking connection to Docker...
	I0723 14:15:52.259530   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetURL
	I0723 14:15:52.260794   29532 main.go:141] libmachine: (ha-533645-m03) DBG | Using libvirt version 6000000
	I0723 14:15:52.263044   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.263453   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.263480   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.263670   29532 main.go:141] libmachine: Docker is up and running!
	I0723 14:15:52.263693   29532 main.go:141] libmachine: Reticulating splines...
	I0723 14:15:52.263700   29532 client.go:171] duration metric: took 25.678736772s to LocalClient.Create
	I0723 14:15:52.263720   29532 start.go:167] duration metric: took 25.678790025s to libmachine.API.Create "ha-533645"
	I0723 14:15:52.263729   29532 start.go:293] postStartSetup for "ha-533645-m03" (driver="kvm2")
	I0723 14:15:52.263738   29532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:15:52.263751   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.263963   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:15:52.263983   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.266402   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.266756   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.266781   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.266891   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.267086   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.267240   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.267374   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.348302   29532 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:15:52.352200   29532 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:15:52.352220   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:15:52.352280   29532 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:15:52.352348   29532 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:15:52.352358   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:15:52.352435   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:15:52.361140   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:15:52.384321   29532 start.go:296] duration metric: took 120.578802ms for postStartSetup
	I0723 14:15:52.384391   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetConfigRaw
	I0723 14:15:52.385025   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:52.387835   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.388216   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.388242   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.388529   29532 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:15:52.388732   29532 start.go:128] duration metric: took 25.822399136s to createHost
	I0723 14:15:52.388758   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.391279   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.391669   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.391694   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.391840   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.392029   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.392191   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.392397   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.392546   29532 main.go:141] libmachine: Using SSH client type: native
	I0723 14:15:52.392727   29532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0723 14:15:52.392740   29532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:15:52.495009   29532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744152.474342471
	
	I0723 14:15:52.495027   29532 fix.go:216] guest clock: 1721744152.474342471
	I0723 14:15:52.495036   29532 fix.go:229] Guest: 2024-07-23 14:15:52.474342471 +0000 UTC Remote: 2024-07-23 14:15:52.388743425 +0000 UTC m=+173.749611455 (delta=85.599046ms)
	I0723 14:15:52.495054   29532 fix.go:200] guest clock delta is within tolerance: 85.599046ms
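The guest clock check parses "date +%s.%N" from the VM and compares it against the host-side timestamp; the ~85.6ms delta above is inside tolerance, so no time sync is forced. A small sketch of that comparison using the timestamps from this run; the 2s tolerance here is an assumption for illustration, not necessarily the value minikube uses:

    package main

    import (
        "fmt"
        "time"
    )

    // withinClockTolerance reports whether the absolute difference between the
    // guest clock and the local clock is below the given tolerance, mirroring
    // the "guest clock delta is within tolerance" check in the log.
    func withinClockTolerance(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(local)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1721744152, 474342471) // parsed from `date +%s.%N` on the VM
        local := time.Unix(1721744152, 388743425) // host-side timestamp from the same log record
        delta, ok := withinClockTolerance(guest, local, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }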
	I0723 14:15:52.495061   29532 start.go:83] releasing machines lock for "ha-533645-m03", held for 25.928862383s
	I0723 14:15:52.495079   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.495332   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:52.498049   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.498425   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.498451   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.500933   29532 out.go:177] * Found network options:
	I0723 14:15:52.502596   29532 out.go:177]   - NO_PROXY=192.168.39.103,192.168.39.182
	W0723 14:15:52.504006   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	W0723 14:15:52.504036   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:15:52.504052   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504645   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504857   29532 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:15:52.504964   29532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:15:52.505003   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	W0723 14:15:52.505045   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	W0723 14:15:52.505071   29532 proxy.go:119] fail to check proxy env: Error ip not in block
	I0723 14:15:52.505146   29532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:15:52.505169   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:15:52.508077   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508103   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508405   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.508430   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508456   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:52.508470   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:52.508566   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.508774   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:15:52.508778   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.508964   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:15:52.508971   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.509158   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:15:52.509152   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.509324   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:15:52.744633   29532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:15:52.750636   29532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:15:52.750711   29532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:15:52.766518   29532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
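Pre-existing bridge and podman CNI configs are renamed with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs; the find/mv pipeline above did exactly that to 87-podman-bridge.conflist. An illustrative Go version of the same rename pass (not minikube's code; the directory path is the one from the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI mimics the find/mv pipeline in the log: any bridge or
    // podman CNI config that is not already disabled gets a .mk_disabled suffix.
    func disableBridgeCNI(dir string) ([]string, error) {
        var disabled []string
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        moved, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(moved, err)
    }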
	I0723 14:15:52.766537   29532 start.go:495] detecting cgroup driver to use...
	I0723 14:15:52.766591   29532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:15:52.782045   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:15:52.794198   29532 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:15:52.794266   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:15:52.807618   29532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:15:52.820716   29532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:15:52.943937   29532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:15:53.078325   29532 docker.go:233] disabling docker service ...
	I0723 14:15:53.078412   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:15:53.092946   29532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:15:53.106364   29532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:15:53.237962   29532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:15:53.357033   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:15:53.371439   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:15:53.389103   29532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:15:53.389165   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.399173   29532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:15:53.399238   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.408720   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.418077   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.428104   29532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:15:53.437770   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.447301   29532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:15:53.463326   29532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
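The CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added under default_sysctls. A rough in-memory equivalent of the first two rewrites, assuming those keys already exist in the file; it is a sketch, not the actual tooling:

    package main

    import (
        "fmt"
        "regexp"
    )

    // The log drives these edits with sed; this performs the same pause_image
    // and cgroup_manager rewrites on config text held in memory.
    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
        conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
        fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }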
	I0723 14:15:53.473778   29532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:15:53.482338   29532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 14:15:53.482415   29532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 14:15:53.494050   29532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:15:53.502660   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:53.615201   29532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:15:53.750921   29532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:15:53.750992   29532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:15:53.756801   29532 start.go:563] Will wait 60s for crictl version
	I0723 14:15:53.756862   29532 ssh_runner.go:195] Run: which crictl
	I0723 14:15:53.760286   29532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:15:53.795682   29532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:15:53.795748   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:15:53.825041   29532 ssh_runner.go:195] Run: crio --version
	I0723 14:15:53.856964   29532 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:15:53.858485   29532 out.go:177]   - env NO_PROXY=192.168.39.103
	I0723 14:15:53.859757   29532 out.go:177]   - env NO_PROXY=192.168.39.103,192.168.39.182
	I0723 14:15:53.860814   29532 main.go:141] libmachine: (ha-533645-m03) Calling .GetIP
	I0723 14:15:53.863390   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:53.863860   29532 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:15:53.863889   29532 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:15:53.864075   29532 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:15:53.867881   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:15:53.879914   29532 mustload.go:65] Loading cluster: ha-533645
	I0723 14:15:53.880186   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:15:53.880561   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:53.880596   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:53.896041   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0723 14:15:53.896446   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:53.896856   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:53.896875   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:53.897194   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:53.897387   29532 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:15:53.899415   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:15:53.899790   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:53.899834   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:53.914519   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0723 14:15:53.914883   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:53.915342   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:53.915362   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:53.915645   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:53.915822   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:15:53.915963   29532 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.127
	I0723 14:15:53.915975   29532 certs.go:194] generating shared ca certs ...
	I0723 14:15:53.915993   29532 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:53.916110   29532 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:15:53.916147   29532 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:15:53.916155   29532 certs.go:256] generating profile certs ...
	I0723 14:15:53.916219   29532 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:15:53.916244   29532 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3
	I0723 14:15:53.916254   29532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.127 192.168.39.254]
	I0723 14:15:54.010349   29532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 ...
	I0723 14:15:54.010376   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3: {Name:mka157d08daeddba13fb0dc4d069c66ea442b999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:54.010596   29532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3 ...
	I0723 14:15:54.010614   29532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3: {Name:mkb672f50ec344593a19ac7e5590865fbf2b75c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:15:54.010689   29532 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.6f82c0d3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:15:54.010819   29532 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.6f82c0d3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
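The apiserver certificate is regenerated at this point because its SAN list must now include the new control-plane IP 192.168.39.127 alongside the service IP, the existing node IPs and the VIP 192.168.39.254. A sketch of the kind of coverage check that decides between "skipping valid signed profile cert" and regenerating; the helper is illustrative, not minikube's:

    package main

    import (
        "crypto/x509"
        "fmt"
        "net"
    )

    // coversIPs reports whether a certificate's IP SANs include every required IP.
    func coversIPs(cert *x509.Certificate, required []string) bool {
        have := map[string]bool{}
        for _, ip := range cert.IPAddresses {
            have[ip.String()] = true
        }
        for _, r := range required {
            if !have[net.ParseIP(r).String()] {
                return false
            }
        }
        return true
    }

    func main() {
        // An existing cert that predates the third control-plane node.
        cert := &x509.Certificate{IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("192.168.39.103"), net.ParseIP("192.168.39.182"),
        }}
        required := []string{"192.168.39.103", "192.168.39.182", "192.168.39.127", "192.168.39.254"}
        fmt.Println("existing cert covers all SANs:", coversIPs(cert, required)) // false -> regenerate
    }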
	I0723 14:15:54.010939   29532 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:15:54.010954   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:15:54.010966   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:15:54.010976   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:15:54.010986   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:15:54.010995   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:15:54.011007   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:15:54.011020   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:15:54.011033   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:15:54.011078   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:15:54.011103   29532 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:15:54.011113   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:15:54.011132   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:15:54.011155   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:15:54.011176   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:15:54.011212   29532 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:15:54.011237   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.011250   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.011262   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.011295   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:15:54.014849   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:54.015284   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:15:54.015320   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:54.015460   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:15:54.015691   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:15:54.015847   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:15:54.015989   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:15:54.098822   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0723 14:15:54.104673   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0723 14:15:54.116899   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0723 14:15:54.120808   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0723 14:15:54.130327   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0723 14:15:54.134004   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0723 14:15:54.143628   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0723 14:15:54.147461   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0723 14:15:54.157233   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0723 14:15:54.161305   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0723 14:15:54.171626   29532 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0723 14:15:54.176014   29532 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0723 14:15:54.186368   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:15:54.210240   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:15:54.235403   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:15:54.259299   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:15:54.282611   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0723 14:15:54.304014   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 14:15:54.325639   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:15:54.347678   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:15:54.369722   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:15:54.392787   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:15:54.420476   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:15:54.442992   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0723 14:15:54.459183   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0723 14:15:54.475286   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0723 14:15:54.491900   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0723 14:15:54.508182   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0723 14:15:54.524424   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0723 14:15:54.540801   29532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0723 14:15:54.556879   29532 ssh_runner.go:195] Run: openssl version
	I0723 14:15:54.562157   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:15:54.571962   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.576167   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.576211   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:15:54.582570   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:15:54.592778   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:15:54.603191   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.607659   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.607726   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:15:54.613448   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:15:54.624157   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:15:54.635881   29532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.641107   29532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.641177   29532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:15:54.646840   29532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
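Each CA placed under /usr/share/ca-certificates is made visible to OpenSSL-based clients by asking "openssl x509 -hash" for its subject hash and linking <hash>.0 to the PEM under /etc/ssl/certs, which is what the test -L || ln -fs commands above do. A sketch of the same pattern; it shells out to the openssl binary (which must be on PATH) and the paths are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the subject hash of a CA PEM, then
    // symlinks <hash>.0 -> the PEM inside certsDir (ln -fs semantics).
    func linkBySubjectHash(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }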
	I0723 14:15:54.657916   29532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:15:54.662016   29532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:15:54.662062   29532 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.30.3 crio true true} ...
	I0723 14:15:54.662147   29532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
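The kubelet systemd drop-in shown above pins --hostname-override and --node-ip for the joining node and points ExecStart at the version-matched binary under /var/lib/minikube/binaries. A minimal text/template rendering of that drop-in; the template string and field names are illustrative, not minikube's actual template type:

    package main

    import (
        "os"
        "text/template"
    )

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        // Values taken from the log record above.
        t := template.Must(template.New("kubelet").Parse(dropin))
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.30.3", "ha-533645-m03", "192.168.39.127"})
    }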
	I0723 14:15:54.662177   29532 kube-vip.go:115] generating kube-vip config ...
	I0723 14:15:54.662215   29532 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:15:54.679594   29532 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:15:54.679668   29532 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
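In the manifest above, lb_enable is "true" because the modprobe of ip_vs/ip_vs_rr/ip_vs_wrr/ip_vs_sh/nf_conntrack a few lines earlier succeeded; that is what "auto-enabling control-plane load-balancing in kube-vip" refers to, with the VIP 192.168.39.254 advertised on eth0 and leader election via the plndr-cp-lock lease. A sketch of that probe; it needs root and the modprobe binary, so treat it as purely illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable reports whether the IPVS kernel modules load, which is the
    // condition the log uses before enabling kube-vip's lb_enable option.
    func ipvsAvailable() bool {
        mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
        return exec.Command("modprobe", append([]string{"--all"}, mods...)...).Run() == nil
    }

    func main() {
        fmt.Println("kube-vip lb_enable:", ipvsAvailable())
    }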
	I0723 14:15:54.679722   29532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:15:54.690369   29532 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0723 14:15:54.690436   29532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0723 14:15:54.700621   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0723 14:15:54.700649   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:15:54.700653   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0723 14:15:54.700668   29532 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0723 14:15:54.700681   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:15:54.700686   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:15:54.700718   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0723 14:15:54.700728   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0723 14:15:54.708166   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0723 14:15:54.708199   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0723 14:15:54.735401   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0723 14:15:54.735408   29532 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:15:54.735452   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0723 14:15:54.735592   29532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0723 14:15:54.777898   29532 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0723 14:15:54.777939   29532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0723 14:15:55.570945   29532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0723 14:15:55.580996   29532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0723 14:15:55.598116   29532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:15:55.615486   29532 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:15:55.631180   29532 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:15:55.634776   29532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:15:55.648638   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:15:55.778734   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:15:55.795591   29532 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:15:55.796076   29532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:15:55.796127   29532 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:15:55.813989   29532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0723 14:15:55.814436   29532 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:15:55.814929   29532 main.go:141] libmachine: Using API Version  1
	I0723 14:15:55.814950   29532 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:15:55.815292   29532 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:15:55.815488   29532 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:15:55.815637   29532 start.go:317] joinCluster: &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:15:55.815752   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0723 14:15:55.815770   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:15:55.818827   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:55.819185   29532 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:15:55.819212   29532 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:15:55.819386   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:15:55.819580   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:15:55.819760   29532 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:15:55.819945   29532 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:15:55.978832   29532 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:15:55.978880   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1mxm0a.dzsiup6q6ovj1n1x --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0723 14:16:20.127146   29532 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1mxm0a.dzsiup6q6ovj1n1x --discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-533645-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (24.148216007s)
	I0723 14:16:20.127180   29532 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0723 14:16:20.731680   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-533645-m03 minikube.k8s.io/updated_at=2024_07_23T14_16_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=ha-533645 minikube.k8s.io/primary=false
	I0723 14:16:20.857972   29532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-533645-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0723 14:16:20.972765   29532 start.go:319] duration metric: took 25.157124447s to joinCluster
	I0723 14:16:20.972861   29532 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:16:20.973197   29532 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:16:20.974580   29532 out.go:177] * Verifying Kubernetes components...
	I0723 14:16:20.975841   29532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:16:21.239954   29532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:16:21.303330   29532 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:16:21.303668   29532 kapi.go:59] client config for ha-533645: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.crt", KeyFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key", CAFile:"/home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0723 14:16:21.303741   29532 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.103:8443
	I0723 14:16:21.303972   29532 node_ready.go:35] waiting up to 6m0s for node "ha-533645-m03" to be "Ready" ...
	I0723 14:16:21.304045   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:21.304055   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:21.304065   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:21.304073   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:21.307424   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:21.804744   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:21.804766   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:21.804775   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:21.804778   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:21.808098   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:22.305126   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:22.305171   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:22.305183   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:22.305189   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:22.310070   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:22.804717   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:22.804737   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:22.804744   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:22.804748   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:22.807630   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:23.305068   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:23.305091   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:23.305099   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:23.305104   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:23.308317   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:23.309043   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:23.805058   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:23.805076   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:23.805084   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:23.805088   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:23.808929   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:24.305048   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:24.305068   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:24.305076   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:24.305081   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:24.308279   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:24.804928   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:24.804948   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:24.804956   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:24.804962   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:24.810954   29532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0723 14:16:25.305165   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:25.305189   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:25.305199   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:25.305205   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:25.308482   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:25.309105   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:25.804654   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:25.804674   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:25.804683   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:25.804688   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:25.808057   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:26.304413   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:26.304434   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:26.304445   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:26.304450   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:26.307908   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:26.804220   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:26.804242   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:26.804249   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:26.804253   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:26.807487   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:27.304218   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:27.304240   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:27.304250   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:27.304255   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:27.308253   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:27.309136   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:27.804426   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:27.804447   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:27.804457   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:27.804462   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:27.807997   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:28.304354   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:28.304373   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:28.304381   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:28.304385   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:28.307504   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:28.804135   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:28.804158   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:28.804168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:28.804173   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:28.808162   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.305155   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:29.305176   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:29.305184   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:29.305187   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:29.308417   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.804230   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:29.804250   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:29.804258   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:29.804263   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:29.807379   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:29.807850   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:30.304219   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:30.304240   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:30.304249   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:30.304252   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:30.307960   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:30.804541   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:30.804563   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:30.804571   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:30.804575   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:30.809317   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:31.304988   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:31.305011   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:31.305021   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:31.305027   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:31.308405   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:31.804660   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:31.804681   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:31.804688   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:31.804692   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:31.808192   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:31.808803   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:32.304153   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:32.304185   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:32.304192   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:32.304196   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:32.307367   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:32.804344   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:32.804367   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:32.804376   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:32.804381   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:32.807377   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:33.304817   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:33.304839   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:33.304846   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:33.304851   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:33.308240   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:33.804976   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:33.804994   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:33.805002   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:33.805008   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:33.808620   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:33.809302   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:34.304479   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:34.304501   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:34.304511   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:34.304517   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:34.308172   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:34.804152   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:34.804179   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:34.804190   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:34.804196   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:34.807482   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:35.305018   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:35.305043   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:35.305055   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:35.305064   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:35.308408   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:35.804906   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:35.804930   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:35.804938   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:35.804943   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:35.808166   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:36.304597   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:36.304621   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:36.304633   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:36.304639   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:36.308300   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:36.309168   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:36.804950   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:36.804974   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:36.804986   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:36.804992   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:36.808398   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:37.304343   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:37.304366   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:37.304377   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:37.304385   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:37.308121   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:37.805070   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:37.805090   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:37.805100   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:37.805106   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:37.808319   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.304879   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:38.304902   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:38.304909   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:38.304914   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:38.308282   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.805003   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:38.805021   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:38.805029   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:38.805032   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:38.808215   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:38.808736   29532 node_ready.go:53] node "ha-533645-m03" has status "Ready":"False"
	I0723 14:16:39.305053   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.305078   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.305088   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.305093   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.311321   29532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0723 14:16:39.311922   29532 node_ready.go:49] node "ha-533645-m03" has status "Ready":"True"
	I0723 14:16:39.311950   29532 node_ready.go:38] duration metric: took 18.007961675s for node "ha-533645-m03" to be "Ready" ...
	I0723 14:16:39.311961   29532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:16:39.312035   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:39.312047   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.312056   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.312061   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.319892   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:39.326251   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.326338   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nrvbf
	I0723 14:16:39.326348   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.326355   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.326359   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.329926   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.330888   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.330905   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.330915   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.330920   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.333435   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.334052   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.334071   29532 pod_ready.go:81] duration metric: took 7.786961ms for pod "coredns-7db6d8ff4d-nrvbf" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.334081   29532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.334146   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-s6xzz
	I0723 14:16:39.334156   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.334168   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.334177   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.336908   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.337747   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.337761   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.337770   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.337776   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.340573   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.340926   29532 pod_ready.go:92] pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.340940   29532 pod_ready.go:81] duration metric: took 6.851025ms for pod "coredns-7db6d8ff4d-s6xzz" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.340951   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.340996   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645
	I0723 14:16:39.341005   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.341015   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.341022   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.343119   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.343603   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:39.343615   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.343624   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.343629   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.346126   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.346601   29532 pod_ready.go:92] pod "etcd-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.346618   29532 pod_ready.go:81] duration metric: took 5.659492ms for pod "etcd-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.346627   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.346684   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m02
	I0723 14:16:39.346693   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.346704   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.346711   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.348901   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.349327   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:39.349339   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.349348   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.349354   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.351431   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:39.351913   29532 pod_ready.go:92] pod "etcd-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.351928   29532 pod_ready.go:81] duration metric: took 5.293908ms for pod "etcd-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.351938   29532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.505161   29532 request.go:629] Waited for 153.168219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m03
	I0723 14:16:39.505237   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-533645-m03
	I0723 14:16:39.505245   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.505257   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.505268   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.508805   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.705500   29532 request.go:629] Waited for 195.995675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.705579   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:39.705591   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.705599   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.705607   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.709091   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:39.710129   29532 pod_ready.go:92] pod "etcd-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:39.710148   29532 pod_ready.go:81] duration metric: took 358.203577ms for pod "etcd-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.710165   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:39.905285   29532 request.go:629] Waited for 195.046973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:16:39.905336   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645
	I0723 14:16:39.905341   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:39.905347   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:39.905350   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:39.908659   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.105745   29532 request.go:629] Waited for 196.382777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:40.105808   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:40.105814   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.105821   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.105825   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.109266   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.109811   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.109829   29532 pod_ready.go:81] duration metric: took 399.655068ms for pod "kube-apiserver-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.109841   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.305908   29532 request.go:629] Waited for 195.988243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:16:40.305969   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m02
	I0723 14:16:40.305977   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.305987   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.305994   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.309384   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.505684   29532 request.go:629] Waited for 195.400548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:40.505739   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:40.505744   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.505749   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.505753   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.509739   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.510452   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.510473   29532 pod_ready.go:81] duration metric: took 400.624465ms for pod "kube-apiserver-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.510487   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.705996   29532 request.go:629] Waited for 195.443515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m03
	I0723 14:16:40.706051   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-533645-m03
	I0723 14:16:40.706057   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.706064   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.706069   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.709564   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.905699   29532 request.go:629] Waited for 195.294921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:40.905760   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:40.905767   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:40.905777   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:40.905782   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:40.909452   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:40.909944   29532 pod_ready.go:92] pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:40.909961   29532 pod_ready.go:81] duration metric: took 399.468318ms for pod "kube-apiserver-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:40.909971   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.105133   29532 request.go:629] Waited for 195.096233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:16:41.105213   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645
	I0723 14:16:41.105220   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.105229   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.105237   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.108964   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:41.305464   29532 request.go:629] Waited for 195.868451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:41.305532   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:41.305538   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.305546   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.305550   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.308775   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:41.309478   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:41.309496   29532 pod_ready.go:81] duration metric: took 399.518903ms for pod "kube-controller-manager-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.309505   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.505757   29532 request.go:629] Waited for 196.172173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:16:41.505829   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m02
	I0723 14:16:41.505837   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.505849   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.505861   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.510166   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:41.705113   29532 request.go:629] Waited for 194.242853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:41.705184   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:41.705193   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.705206   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.705217   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.713605   29532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0723 14:16:41.714305   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:41.714329   29532 pod_ready.go:81] duration metric: took 404.816581ms for pod "kube-controller-manager-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.714343   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:41.905444   29532 request.go:629] Waited for 191.011459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m03
	I0723 14:16:41.905531   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-533645-m03
	I0723 14:16:41.905542   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:41.905557   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:41.905567   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:41.908965   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.106141   29532 request.go:629] Waited for 196.385763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:42.106193   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:42.106198   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.106206   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.106210   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.109483   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.109967   29532 pod_ready.go:92] pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.109983   29532 pod_ready.go:81] duration metric: took 395.632651ms for pod "kube-controller-manager-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.109991   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.305153   29532 request.go:629] Waited for 195.091701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:16:42.305204   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w
	I0723 14:16:42.305209   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.305216   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.305220   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.308537   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.505750   29532 request.go:629] Waited for 196.37531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:42.505809   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:42.505815   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.505826   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.505830   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.509049   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.509630   29532 pod_ready.go:92] pod "kube-proxy-9wh4w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.509652   29532 pod_ready.go:81] duration metric: took 399.65434ms for pod "kube-proxy-9wh4w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.509661   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.705846   29532 request.go:629] Waited for 196.113608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:16:42.705921   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p25cg
	I0723 14:16:42.705930   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.705944   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.705951   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.709128   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.905073   29532 request.go:629] Waited for 195.264685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:42.905146   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:42.905151   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:42.905158   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:42.905162   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:42.908373   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:42.908958   29532 pod_ready.go:92] pod "kube-proxy-p25cg" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:42.908972   29532 pod_ready.go:81] duration metric: took 399.30612ms for pod "kube-proxy-p25cg" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:42.908982   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsk2w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.105044   29532 request.go:629] Waited for 196.001396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsk2w
	I0723 14:16:43.105140   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsk2w
	I0723 14:16:43.105151   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.105160   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.105171   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.108726   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.306047   29532 request.go:629] Waited for 196.381996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:43.306102   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:43.306107   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.306122   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.306140   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.309423   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.309947   29532 pod_ready.go:92] pod "kube-proxy-xsk2w" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:43.309970   29532 pod_ready.go:81] duration metric: took 400.979959ms for pod "kube-proxy-xsk2w" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.309981   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.506079   29532 request.go:629] Waited for 196.029634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:16:43.506131   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645
	I0723 14:16:43.506139   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.506147   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.506151   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.509315   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:43.706043   29532 request.go:629] Waited for 196.207662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:43.706105   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645
	I0723 14:16:43.706112   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.706121   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.706129   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.708973   29532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0723 14:16:43.709736   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:43.709751   29532 pod_ready.go:81] duration metric: took 399.764828ms for pod "kube-scheduler-ha-533645" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.709759   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:43.905765   29532 request.go:629] Waited for 195.951609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:16:43.905822   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m02
	I0723 14:16:43.905829   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:43.905839   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:43.905846   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:43.909170   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.105832   29532 request.go:629] Waited for 195.539296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:44.105904   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m02
	I0723 14:16:44.105915   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.105926   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.105936   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.109197   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.109691   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:44.109706   29532 pod_ready.go:81] duration metric: took 399.940415ms for pod "kube-scheduler-ha-533645-m02" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.109714   29532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.305862   29532 request.go:629] Waited for 196.082514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m03
	I0723 14:16:44.305933   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645-m03
	I0723 14:16:44.305939   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.305947   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.305953   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.309634   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.505776   29532 request.go:629] Waited for 195.381264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:44.505825   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes/ha-533645-m03
	I0723 14:16:44.505830   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.505840   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.505851   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.509375   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.509973   29532 pod_ready.go:92] pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace has status "Ready":"True"
	I0723 14:16:44.509995   29532 pod_ready.go:81] duration metric: took 400.274164ms for pod "kube-scheduler-ha-533645-m03" in "kube-system" namespace to be "Ready" ...
	I0723 14:16:44.510008   29532 pod_ready.go:38] duration metric: took 5.198035353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
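The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True. A minimal sketch of that check with client-go; the kubeconfig path and pod name are illustrative, and this is not minikube's actual helper.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True,
// mirroring the kind of check behind the pod_ready.go log lines.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "kube-proxy-9wh4w")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}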
	I0723 14:16:44.510024   29532 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:16:44.510083   29532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:16:44.525393   29532 api_server.go:72] duration metric: took 23.552497184s to wait for apiserver process to appear ...
	I0723 14:16:44.525418   29532 api_server.go:88] waiting for apiserver healthz status ...
	I0723 14:16:44.525438   29532 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0723 14:16:44.529527   29532 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0723 14:16:44.529609   29532 round_trippers.go:463] GET https://192.168.39.103:8443/version
	I0723 14:16:44.529619   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.529631   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.529640   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.530449   29532 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0723 14:16:44.530529   29532 api_server.go:141] control plane version: v1.30.3
	I0723 14:16:44.530553   29532 api_server.go:131] duration metric: took 5.128474ms to wait for apiserver health ...
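The api_server.go entries first probe /healthz and then read the server version (the "control plane version" line). An illustrative sketch of the same two calls through a clientset's discovery REST client; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /healthz through the already-authenticated REST client;
	// a healthy apiserver answers 200 with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Server version, as reported in the "control plane version" log line.
	version, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", version.GitVersion)
}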
	I0723 14:16:44.530567   29532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:16:44.706031   29532 request.go:629] Waited for 175.341019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:44.706120   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:44.706128   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.706138   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.706148   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.713376   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:44.721248   29532 system_pods.go:59] 24 kube-system pods found
	I0723 14:16:44.721276   29532 system_pods.go:61] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:16:44.721282   29532 system_pods.go:61] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:16:44.721286   29532 system_pods.go:61] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:16:44.721302   29532 system_pods.go:61] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:16:44.721306   29532 system_pods.go:61] "etcd-ha-533645-m03" [3ec29d59-0196-4ebf-ac28-f70415297b7c] Running
	I0723 14:16:44.721309   29532 system_pods.go:61] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:16:44.721312   29532 system_pods.go:61] "kindnet-99qsf" [b7121912-e364-489d-ae7d-b762094fade9] Running
	I0723 14:16:44.721316   29532 system_pods.go:61] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:16:44.721322   29532 system_pods.go:61] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:16:44.721325   29532 system_pods.go:61] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:16:44.721328   29532 system_pods.go:61] "kube-apiserver-ha-533645-m03" [264831e9-6816-45a8-b917-ef003a6aefd8] Running
	I0723 14:16:44.721331   29532 system_pods.go:61] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:16:44.721337   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:16:44.721340   29532 system_pods.go:61] "kube-controller-manager-ha-533645-m03" [d3604797-9120-4668-93c6-8c5325f3854a] Running
	I0723 14:16:44.721346   29532 system_pods.go:61] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:16:44.721349   29532 system_pods.go:61] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:16:44.721352   29532 system_pods.go:61] "kube-proxy-xsk2w" [28febb11-2841-47d3-ae98-4f53347e568d] Running
	I0723 14:16:44.721355   29532 system_pods.go:61] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:16:44.721358   29532 system_pods.go:61] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:16:44.721362   29532 system_pods.go:61] "kube-scheduler-ha-533645-m03" [92b55f29-a3c2-418b-9575-b2a60e52ad62] Running
	I0723 14:16:44.721367   29532 system_pods.go:61] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:16:44.721369   29532 system_pods.go:61] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:16:44.721372   29532 system_pods.go:61] "kube-vip-ha-533645-m03" [ffece806-d630-4ffe-9a91-9c94311508f0] Running
	I0723 14:16:44.721375   29532 system_pods.go:61] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:16:44.721380   29532 system_pods.go:74] duration metric: took 190.805076ms to wait for pod list to return data ...
	I0723 14:16:44.721389   29532 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:16:44.905823   29532 request.go:629] Waited for 184.361301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:16:44.905874   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/default/serviceaccounts
	I0723 14:16:44.905879   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:44.905886   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:44.905890   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:44.909086   29532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0723 14:16:44.909204   29532 default_sa.go:45] found service account: "default"
	I0723 14:16:44.909219   29532 default_sa.go:55] duration metric: took 187.824123ms for default service account to be created ...
	I0723 14:16:44.909230   29532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:16:45.105653   29532 request.go:629] Waited for 196.356753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:45.105734   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/namespaces/kube-system/pods
	I0723 14:16:45.105742   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:45.105752   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:45.105760   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:45.113451   29532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0723 14:16:45.119771   29532 system_pods.go:86] 24 kube-system pods found
	I0723 14:16:45.119797   29532 system_pods.go:89] "coredns-7db6d8ff4d-nrvbf" [ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad] Running
	I0723 14:16:45.119803   29532 system_pods.go:89] "coredns-7db6d8ff4d-s6xzz" [926a30df-71f1-48d7-92fb-ead057f2504d] Running
	I0723 14:16:45.119807   29532 system_pods.go:89] "etcd-ha-533645" [699ef924-6986-4195-bf41-c8a1c7de12cc] Running
	I0723 14:16:45.119811   29532 system_pods.go:89] "etcd-ha-533645-m02" [4b5143a3-0d38-4bd8-8ac9-b560835ed858] Running
	I0723 14:16:45.119815   29532 system_pods.go:89] "etcd-ha-533645-m03" [3ec29d59-0196-4ebf-ac28-f70415297b7c] Running
	I0723 14:16:45.119819   29532 system_pods.go:89] "kindnet-95sfh" [949aced9-1302-44dd-a5dc-2c61583579be] Running
	I0723 14:16:45.119823   29532 system_pods.go:89] "kindnet-99qsf" [b7121912-e364-489d-ae7d-b762094fade9] Running
	I0723 14:16:45.119828   29532 system_pods.go:89] "kindnet-99vkr" [495ea524-de15-401d-9ed3-fec375bc8042] Running
	I0723 14:16:45.119832   29532 system_pods.go:89] "kube-apiserver-ha-533645" [1a9e6e90-bfba-45ee-ac83-a946d928db81] Running
	I0723 14:16:45.119836   29532 system_pods.go:89] "kube-apiserver-ha-533645-m02" [0123ba05-45dc-4056-9a7a-dced0abf2235] Running
	I0723 14:16:45.119842   29532 system_pods.go:89] "kube-apiserver-ha-533645-m03" [264831e9-6816-45a8-b917-ef003a6aefd8] Running
	I0723 14:16:45.119849   29532 system_pods.go:89] "kube-controller-manager-ha-533645" [88a36a12-3838-4159-bf14-14d2ebecf51d] Running
	I0723 14:16:45.119854   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m02" [bc145c15-cd1e-4547-b781-869817008499] Running
	I0723 14:16:45.119860   29532 system_pods.go:89] "kube-controller-manager-ha-533645-m03" [d3604797-9120-4668-93c6-8c5325f3854a] Running
	I0723 14:16:45.119866   29532 system_pods.go:89] "kube-proxy-9wh4w" [d9eb4982-e145-42cf-9a84-6013d7cdd3aa] Running
	I0723 14:16:45.119875   29532 system_pods.go:89] "kube-proxy-p25cg" [379aef41-5e99-476d-be83-968a1a007e44] Running
	I0723 14:16:45.119881   29532 system_pods.go:89] "kube-proxy-xsk2w" [28febb11-2841-47d3-ae98-4f53347e568d] Running
	I0723 14:16:45.119891   29532 system_pods.go:89] "kube-scheduler-ha-533645" [1adc432c-7b87-483b-9d1f-8deb3ba4ad81] Running
	I0723 14:16:45.119896   29532 system_pods.go:89] "kube-scheduler-ha-533645-m02" [0c0ca6ee-6c60-4002-a45f-4b344ed0038c] Running
	I0723 14:16:45.119900   29532 system_pods.go:89] "kube-scheduler-ha-533645-m03" [92b55f29-a3c2-418b-9575-b2a60e52ad62] Running
	I0723 14:16:45.119904   29532 system_pods.go:89] "kube-vip-ha-533645" [f21f8827-c6f7-4767-b7f5-f23c385e93ae] Running
	I0723 14:16:45.119910   29532 system_pods.go:89] "kube-vip-ha-533645-m02" [b2b262eb-a3d6-488e-9284-493c57c05660] Running
	I0723 14:16:45.119914   29532 system_pods.go:89] "kube-vip-ha-533645-m03" [ffece806-d630-4ffe-9a91-9c94311508f0] Running
	I0723 14:16:45.119918   29532 system_pods.go:89] "storage-provisioner" [52ab05ba-6dfc-4cc6-9085-8632f5cd7a66] Running
	I0723 14:16:45.119926   29532 system_pods.go:126] duration metric: took 210.68981ms to wait for k8s-apps to be running ...
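The system_pods.go entries above issue a single LIST against the kube-system namespace and confirm every pod is Running. A compact, illustrative sketch of the same listing (kubeconfig path assumed).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One LIST call against /api/v1/namespaces/kube-system/pods, as in the log.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q [%s] running=%v\n", p.Name, p.UID, running)
	}
}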
	I0723 14:16:45.119936   29532 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:16:45.119987   29532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:16:45.134807   29532 system_svc.go:56] duration metric: took 14.864593ms WaitForService to wait for kubelet
	I0723 14:16:45.134832   29532 kubeadm.go:582] duration metric: took 24.161941777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:16:45.134850   29532 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:16:45.305160   29532 request.go:629] Waited for 170.246266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.103:8443/api/v1/nodes
	I0723 14:16:45.305209   29532 round_trippers.go:463] GET https://192.168.39.103:8443/api/v1/nodes
	I0723 14:16:45.305214   29532 round_trippers.go:469] Request Headers:
	I0723 14:16:45.305221   29532 round_trippers.go:473]     Accept: application/json, */*
	I0723 14:16:45.305229   29532 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0723 14:16:45.309334   29532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0723 14:16:45.310713   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310735   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310749   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310753   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310759   29532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 14:16:45.310764   29532 node_conditions.go:123] node cpu capacity is 2
	I0723 14:16:45.310770   29532 node_conditions.go:105] duration metric: took 175.91549ms to run NodePressure ...
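The node_conditions.go entries read each node's ephemeral-storage and cpu capacity and verify there is no resource pressure. A short illustrative sketch of the same read (kubeconfig path assumed).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

		// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
			}
		}
	}
}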
	I0723 14:16:45.310783   29532 start.go:241] waiting for startup goroutines ...
	I0723 14:16:45.310811   29532 start.go:255] writing updated cluster config ...
	I0723 14:16:45.311165   29532 ssh_runner.go:195] Run: rm -f paused
	I0723 14:16:45.361735   29532 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 14:16:45.363591   29532 out.go:177] * Done! kubectl is now configured to use "ha-533645" cluster and "default" namespace by default
	
	
	==> CRI-O <==
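The CRI-O log below records the CRI gRPC calls made against crio: Version, ImageFsInfo, and ListContainers on the RuntimeService/ImageService. A hedged sketch of issuing the Version and ListContainers RPCs directly over the CRI socket; the socket path and the use of k8s.io/cri-api here are assumptions about the environment, not taken from this report.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust if your runtime uses a different path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Same RPC as the "/runtime.v1.RuntimeService/Version" entries below.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same RPC as the large ListContainers responses below; an empty request returns all containers.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}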
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.482942199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f88745ea-f57b-4e32-825a-9ce1e31cd94a name=/runtime.v1.RuntimeService/Version
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.483873352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50f9dd29-410d-4e43-a93b-40b4d02ef39f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.484351636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744485484330313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50f9dd29-410d-4e43-a93b-40b4d02ef39f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.484855334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76ad988c-fb5c-40cc-aa86-2a5e915a6718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.484919451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76ad988c-fb5c-40cc-aa86-2a5e915a6718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.485207759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76ad988c-fb5c-40cc-aa86-2a5e915a6718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.523714027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeb28758-6d65-4995-bc8b-9e59f4c57ada name=/runtime.v1.RuntimeService/Version
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.523912890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeb28758-6d65-4995-bc8b-9e59f4c57ada name=/runtime.v1.RuntimeService/Version
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.525308190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc1bb794-2a6e-4697-a8ee-bb6e9e154a75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.525834647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744485525810084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc1bb794-2a6e-4697-a8ee-bb6e9e154a75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.526563182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51befdae-6235-434f-b46b-f5a01e320686 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.526630364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51befdae-6235-434f-b46b-f5a01e320686 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.526881221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51befdae-6235-434f-b46b-f5a01e320686 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.573688300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21d6f383-dadd-4af6-b5a0-604d214a4754 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.573774617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21d6f383-dadd-4af6-b5a0-604d214a4754 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.575738635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ad62562-8e64-41bd-9022-a9b875d1ed0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.576454123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744485576420336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ad62562-8e64-41bd-9022-a9b875d1ed0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.577179683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3bfc9a4-0c9a-4a53-9740-8ea4b793323b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.577245415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3bfc9a4-0c9a-4a53-9740-8ea4b793323b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.577468482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3bfc9a4-0c9a-4a53-9740-8ea4b793323b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.580106896Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=25800c10-cf3b-4e2d-81b2-4bdb08f4d30c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.580396867Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-cd87c,Uid:c96075c6-138f-49ca-80af-c75e842c5852,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744207484085195,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:16:46.274827348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrvbf,Uid:ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721744046119612900,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.809780225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744046111338700,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T14:14:05.803689838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s6xzz,Uid:926a30df-71f1-48d7-92fb-ead057f2504d,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1721744046102839801,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.795958697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&PodSandboxMetadata{Name:kindnet-99vkr,Uid:495ea524-de15-401d-9ed3-fec375bc8042,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744029809613826,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.495076967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&PodSandboxMetadata{Name:kube-proxy-9wh4w,Uid:d9eb4982-e145-42cf-9a84-6013d7cdd3aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744029807470232,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.486258898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&PodSandboxMetadata{Name:etcd-ha-533645,Uid:0116d3bd9333422ee3ba97043c03c966,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721744010425473909,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.103:2379,kubernetes.io/config.hash: 0116d3bd9333422ee3ba97043c03c966,kubernetes.io/config.seen: 2024-07-23T14:13:29.926300650Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-533645,Uid:a779b56396ae961a52b991bf79e41c79,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010422906791,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a779b56396ae961a52b991bf79e41c79,kubernetes.io/config.seen: 2024-07-23T14:13:29.926307126Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-533645,Uid:6de7f3c8e278c087425628d1b79c1d22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010404618151,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6de7f3c8e278c087425628d1b79c1d22,kubernetes.io/config.seen: 2024-07-23T14:13:29.926308682Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc95369f4505809db69c
a9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-533645,Uid:5693e50c5ce4a113bda653dc5ed85d89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010394627656,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.103:8443,kubernetes.io/config.hash: 5693e50c5ce4a113bda653dc5ed85d89,kubernetes.io/config.seen: 2024-07-23T14:13:29.926305579Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-533645,Uid:9f8fbea26449d1f00f1c8649ad6192db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721744010386223112,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{kubernetes.io/config.hash: 9f8fbea26449d1f00f1c8649ad6192db,kubernetes.io/config.seen: 2024-07-23T14:13:29.926310051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=25800c10-cf3b-4e2d-81b2-4bdb08f4d30c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.581057824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d17cc6-69e7-4f62-92f7-aa265043f58c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.581330654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d17cc6-69e7-4f62-92f7-aa265043f58c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:21:25 ha-533645 crio[675]: time="2024-07-23 14:21:25.581834054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744210279591236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046410009417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744046339833119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f,PodSandboxId:bc76cb45947ed8547574e75373db182ce449b66e52c8bb9f5f4ac956a54a2e07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721744046289737575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721744034722715246,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172174403
0096397112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71,PodSandboxId:5e993964684c665c4ed31b343a43de75fb35f6f9b895a0be2fc6a000bfb64c53,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17217440138
75619370,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8fbea26449d1f00f1c8649ad6192db,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744010678763244,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744010650663684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c,PodSandboxId:c988725ad6a30b266e14602232f944b59ca929fba82a2bf6a622366724aee5be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744010681922650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3,PodSandboxId:bc95369f4505809db69ca9239d1b3f4f5b957b053de3da54f91b344d314161d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744010632032376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d17cc6-69e7-4f62-92f7-aa265043f58c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01ba0f9525e42       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   8e48b2467dce8       busybox-fc5497c4f-cd87c
	875e4306cadef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   67e32a92d8db3       coredns-7db6d8ff4d-nrvbf
	c272094e83046       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   a7feedf1d20d0       coredns-7db6d8ff4d-s6xzz
	ee98d1058de99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   bc76cb45947ed       storage-provisioner
	204bd8ec5a070       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   08c39cde805a7       kindnet-99vkr
	1d5b9787b76de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   8cb09524a9c81       kube-proxy-9wh4w
	a208ea67ea379       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   5e993964684c6       kube-vip-ha-533645
	76bcad60035c6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   c988725ad6a30       kube-controller-manager-ha-533645
	081aaa8c6121c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   5d23d91d7b6c3       etcd-ha-533645
	7972ddd5dc32d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   17bfeff63e984       kube-scheduler-ha-533645
	e28c0ebf351e0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   bc95369f45058       kube-apiserver-ha-533645
	
	
	==> coredns [875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219] <==
	[INFO] 10.244.0.4:45062 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281951s
	[INFO] 10.244.0.4:39795 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002944241s
	[INFO] 10.244.0.4:33788 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001262s
	[INFO] 10.244.0.4:49837 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156655s
	[INFO] 10.244.0.4:37869 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111262s
	[INFO] 10.244.0.4:49583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187618s
	[INFO] 10.244.0.4:47929 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087678s
	[INFO] 10.244.2.2:38089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189381s
	[INFO] 10.244.2.2:42424 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002105089s
	[INFO] 10.244.2.2:44423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066747s
	[INFO] 10.244.1.2:32850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770779s
	[INFO] 10.244.1.2:53620 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074588s
	[INFO] 10.244.1.2:33169 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009318s
	[INFO] 10.244.0.4:47876 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009475s
	[INFO] 10.244.2.2:42045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092251s
	[INFO] 10.244.2.2:58530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137054s
	[INFO] 10.244.1.2:36698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167251s
	[INFO] 10.244.1.2:56144 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082378s
	[INFO] 10.244.1.2:37800 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138485s
	[INFO] 10.244.0.4:35800 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198717s
	[INFO] 10.244.0.4:55540 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113741s
	[INFO] 10.244.0.4:40041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000256677s
	[INFO] 10.244.1.2:51609 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132031s
	[INFO] 10.244.1.2:56610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023971s
	[INFO] 10.244.1.2:42525 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084914s
	
	
	==> coredns [c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46] <==
	[INFO] 10.244.1.2:37484 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000094208s
	[INFO] 10.244.1.2:41079 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001672282s
	[INFO] 10.244.0.4:39127 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003361091s
	[INFO] 10.244.2.2:49158 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214966s
	[INFO] 10.244.2.2:52807 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149002s
	[INFO] 10.244.2.2:36170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001374503s
	[INFO] 10.244.2.2:32919 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148684s
	[INFO] 10.244.2.2:33222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130497s
	[INFO] 10.244.1.2:41720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132072s
	[INFO] 10.244.1.2:46039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136478s
	[INFO] 10.244.1.2:42265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246596s
	[INFO] 10.244.1.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106745s
	[INFO] 10.244.1.2:42065 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173598s
	[INFO] 10.244.0.4:49694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097989s
	[INFO] 10.244.0.4:55332 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105679s
	[INFO] 10.244.0.4:55778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057634s
	[INFO] 10.244.2.2:46643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151446s
	[INFO] 10.244.2.2:47656 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125295s
	[INFO] 10.244.1.2:33099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116864s
	[INFO] 10.244.0.4:43829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233901s
	[INFO] 10.244.2.2:39898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180683s
	[INFO] 10.244.2.2:53185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148942s
	[INFO] 10.244.2.2:36301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319769s
	[INFO] 10.244.2.2:54739 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011416s
	[INFO] 10.244.1.2:40740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148117s
	
	
	==> describe nodes <==
	Name:               ha-533645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:10 +0000   Tue, 23 Jul 2024 14:14:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-533645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 016f247620dd4139a26ce62f3129dde1
	  System UUID:                016f2476-20dd-4139-a26c-e62f3129dde1
	  Boot ID:                    218264a1-e12e-486d-a0c2-4ec59bc9cd30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cd87c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7db6d8ff4d-nrvbf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m36s
	  kube-system                 coredns-7db6d8ff4d-s6xzz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m36s
	  kube-system                 etcd-ha-533645                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m50s
	  kube-system                 kindnet-99vkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-apiserver-ha-533645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-controller-manager-ha-533645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-9wh4w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-ha-533645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-vip-ha-533645                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m35s                  kube-proxy       
	  Normal  Starting                 7m56s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m55s (x7 over 7m56s)  kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m55s (x8 over 7m56s)  kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s (x8 over 7m56s)  kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m49s                  kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m49s                  kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s                  kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m37s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal  NodeReady                7m20s                  kubelet          Node ha-533645 status is now: NodeReady
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	
	
	Name:               ha-533645-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:15:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:17:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 23 Jul 2024 14:17:05 +0000   Tue, 23 Jul 2024 14:18:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-533645-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 024bddfd48eb471b960e0dab2d3cd45b
	  System UUID:                024bddfd-48eb-471b-960e-0dab2d3cd45b
	  Boot ID:                    151372c0-a26e-4262-8f8f-67f30f77aff3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tlvlp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-533645-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-95sfh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-533645-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-533645-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-p25cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-533645-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-533645-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-533645-m02 status is now: NodeNotReady
	
	
	Name:               ha-533645-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_16_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:21:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:18 +0000   Tue, 23 Jul 2024 14:16:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-533645-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58ea8f3065de44aea0aac5ffb591660d
	  System UUID:                58ea8f30-65de-44ae-a0aa-c5ffb591660d
	  Boot ID:                    a51eb8ca-a3c9-4da0-bf41-6ea9d59a8829
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq2ww                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-533645-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-99qsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-533645-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-ha-533645-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-xsk2w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-533645-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-vip-ha-533645-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-533645-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal  RegisteredNode           4m50s                node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	
	
	Name:               ha-533645-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_17_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:17:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:21:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:17:56 +0000   Tue, 23 Jul 2024 14:17:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-533645-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d58ceb89e2492c9f4ada3b3365c263
	  System UUID:                c6d58ceb-89e2-492c-9f4a-da3b3365c263
	  Boot ID:                    02dbcde4-8925-40e5-a9f0-f49b7734fc1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f4tkn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-nz528    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)  kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s            node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal  NodeReady                3m40s            kubelet          Node ha-533645-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul23 14:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050205] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.689589] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.850291] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.556173] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.424464] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.065789] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058371] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.157255] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.139843] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.253665] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.906302] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.745369] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.058504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.271647] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.077951] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.844081] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.054308] kauditd_printk_skb: 34 callbacks suppressed
	[Jul23 14:15] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e] <==
	{"level":"warn","ts":"2024-07-23T14:21:25.83787Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.841634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.855406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.861994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.868785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.87332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.877358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.885369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.89158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.903218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.928578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.931321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.938476Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.941291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.947338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.959395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.985109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.991519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.995248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:25.9984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:26.002753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:26.00443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:26.010561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:26.017103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:21:26.081597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:21:26 up 8 min,  0 users,  load average: 0.40, 0.29, 0.14
	Linux ha-533645 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493] <==
	I0723 14:20:55.723881       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:21:05.731678       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:21:05.731805       1 main.go:299] handling current node
	I0723 14:21:05.731837       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:21:05.731857       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:21:05.732017       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:21:05.732055       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:21:05.732188       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:21:05.732235       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:21:15.728841       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:21:15.728888       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:21:15.729020       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:21:15.729039       1 main.go:299] handling current node
	I0723 14:21:15.729051       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:21:15.729060       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:21:15.729362       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:21:15.729401       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:21:25.727217       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:21:25.727273       1 main.go:299] handling current node
	I0723 14:21:25.727289       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:21:25.727295       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:21:25.727488       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:21:25.727508       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:21:25.727563       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:21:25.727580       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3] <==
	W0723 14:13:35.234036       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103]
	I0723 14:13:35.235051       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:13:35.239042       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0723 14:13:35.345552       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:13:36.880343       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:13:36.910767       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0723 14:13:36.928200       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:13:49.204695       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 14:13:49.454603       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0723 14:16:51.018356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53594: use of closed network connection
	E0723 14:16:51.203052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53600: use of closed network connection
	E0723 14:16:51.390482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53628: use of closed network connection
	E0723 14:16:51.577002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53648: use of closed network connection
	E0723 14:16:51.765569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53664: use of closed network connection
	E0723 14:16:51.951051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53682: use of closed network connection
	E0723 14:16:52.124107       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53704: use of closed network connection
	E0723 14:16:52.305698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53714: use of closed network connection
	E0723 14:16:52.477519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53730: use of closed network connection
	E0723 14:16:52.752630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53746: use of closed network connection
	E0723 14:16:52.955792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53764: use of closed network connection
	E0723 14:16:53.126631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53784: use of closed network connection
	E0723 14:16:53.301502       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53802: use of closed network connection
	E0723 14:16:53.478848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53824: use of closed network connection
	E0723 14:16:53.647412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53850: use of closed network connection
	W0723 14:18:15.250667       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.127]
	
	
	==> kube-controller-manager [76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c] <==
	I0723 14:16:18.614450       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-533645-m03"
	I0723 14:16:46.277524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.174892ms"
	I0723 14:16:46.317741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.901363ms"
	I0723 14:16:46.484553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.669785ms"
	I0723 14:16:46.571277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.424291ms"
	I0723 14:16:46.630592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.176373ms"
	E0723 14:16:46.630662       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:16:46.649326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.545549ms"
	I0723 14:16:46.649885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.79µs"
	I0723 14:16:46.734858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.646634ms"
	I0723 14:16:46.735172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.856µs"
	I0723 14:16:49.337385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.570376ms"
	I0723 14:16:49.337563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.438µs"
	I0723 14:16:50.457869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.14968ms"
	I0723 14:16:50.457952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.352µs"
	I0723 14:16:50.590664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.043605ms"
	I0723 14:16:50.590965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.8µs"
	E0723 14:17:26.166343       1 certificate_controller.go:146] Sync csr-5xkvd failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-5xkvd": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:17:26.447459       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-533645-m04\" does not exist"
	I0723 14:17:26.466812       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-533645-m04" podCIDRs=["10.244.3.0/24"]
	I0723 14:17:28.627640       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-533645-m04"
	I0723 14:17:46.647314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-533645-m04"
	I0723 14:18:38.667971       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-533645-m04"
	I0723 14:18:38.830910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.20937ms"
	I0723 14:18:38.831100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.783µs"
	
	
	==> kube-proxy [1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e] <==
	I0723 14:13:50.430698       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:13:50.446236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0723 14:13:50.513939       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:13:50.513988       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:13:50.514006       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:13:50.517541       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:13:50.517784       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:13:50.517815       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:13:50.523174       1 config.go:192] "Starting service config controller"
	I0723 14:13:50.523448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:13:50.523931       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:13:50.523955       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:13:50.524688       1 config.go:319] "Starting node config controller"
	I0723 14:13:50.524712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:13:50.624948       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:13:50.624996       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:13:50.625037       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090] <==
	W0723 14:13:34.320386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:13:34.320435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:13:34.479391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:13:34.479431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:13:34.514764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:13:34.514882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:13:34.617221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:13:34.617366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:13:34.719034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:13:34.719229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:13:34.730376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:13:34.730419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:13:34.812082       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:13:34.812162       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 14:13:37.973607       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0723 14:16:46.281408       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cd87c\": pod busybox-fc5497c4f-cd87c is already assigned to node \"ha-533645\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cd87c" node="ha-533645"
	E0723 14:16:46.281593       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cd87c\": pod busybox-fc5497c4f-cd87c is already assigned to node \"ha-533645\"" pod="default/busybox-fc5497c4f-cd87c"
	E0723 14:17:26.517858       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nz528\": pod kube-proxy-nz528 is already assigned to node \"ha-533645-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nz528" node="ha-533645-m04"
	E0723 14:17:26.518053       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f058c988-f8e0-477d-9e96-73e0ee09d91e(kube-system/kube-proxy-nz528) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nz528"
	E0723 14:17:26.518185       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nz528\": pod kube-proxy-nz528 is already assigned to node \"ha-533645-m04\"" pod="kube-system/kube-proxy-nz528"
	I0723 14:17:26.518229       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nz528" node="ha-533645-m04"
	E0723 14:17:26.535903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f4tkn\": pod kindnet-f4tkn is already assigned to node \"ha-533645-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-f4tkn" node="ha-533645-m04"
	E0723 14:17:26.538926       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2694466c-e2cd-480a-b713-2e1cd5cfdb00(kube-system/kindnet-f4tkn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f4tkn"
	E0723 14:17:26.539006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f4tkn\": pod kindnet-f4tkn is already assigned to node \"ha-533645-m04\"" pod="kube-system/kindnet-f4tkn"
	I0723 14:17:26.539031       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f4tkn" node="ha-533645-m04"
	
	
	==> kubelet <==
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.273673    1366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nrvbf" podStartSLOduration=177.273606251 podStartE2EDuration="2m57.273606251s" podCreationTimestamp="2024-07-23 14:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-23 14:14:07.012954857 +0000 UTC m=+30.343295672" watchObservedRunningTime="2024-07-23 14:16:46.273606251 +0000 UTC m=+189.603947073"
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.274979    1366 topology_manager.go:215] "Topology Admit Handler" podUID="c96075c6-138f-49ca-80af-c75e842c5852" podNamespace="default" podName="busybox-fc5497c4f-cd87c"
	Jul 23 14:16:46 ha-533645 kubelet[1366]: W0723 14:16:46.283802    1366 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-533645" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-533645' and this object
	Jul 23 14:16:46 ha-533645 kubelet[1366]: E0723 14:16:46.284590    1366 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-533645" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-533645' and this object
	Jul 23 14:16:46 ha-533645 kubelet[1366]: I0723 14:16:46.365660    1366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl9z\" (UniqueName: \"kubernetes.io/projected/c96075c6-138f-49ca-80af-c75e842c5852-kube-api-access-fjl9z\") pod \"busybox-fc5497c4f-cd87c\" (UID: \"c96075c6-138f-49ca-80af-c75e842c5852\") " pod="default/busybox-fc5497c4f-cd87c"
	Jul 23 14:17:36 ha-533645 kubelet[1366]: E0723 14:17:36.826847    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:17:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:17:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:18:36 ha-533645 kubelet[1366]: E0723 14:18:36.828476    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:18:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:18:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:19:36 ha-533645 kubelet[1366]: E0723 14:19:36.827754    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:19:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:19:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:20:36 ha-533645 kubelet[1366]: E0723 14:20:36.826766    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:20:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:20:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:20:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:20:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
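The etcd log above shows the local member repeatedly dropping MsgHeartbeat traffic for peer 2edb5742552f5bc0 with remote-peer-active:false, i.e. the restarted secondary control-plane member was still unreachable from this node when the post-mortem logs were captured. A minimal sketch for confirming member state from the surviving control plane is shown below; the etcd pod name follows the usual etcd-<node-name> static-pod convention and the certificate paths assume minikube's default /var/lib/minikube/certs layout, neither of which is spelled out in this report:

	kubectl --context ha-533645 -n kube-system exec etcd-ha-533645 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  member list -w table

Swapping "member list -w table" for "endpoint health --cluster" queries every member listed in the cluster configuration for health rather than just printing membership.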
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-533645 -n ha-533645
helpers_test.go:261: (dbg) Run:  kubectl --context ha-533645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.29s)
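For local reproduction, the serial group containing this step can be run with the standard Go test runner; the sketch below assumes the usual minikube repository layout (integration tests under test/integration) and omits the harness-specific flags that select the kvm2 driver and crio runtime used in this job. The whole serial group is targeted because RestartSecondaryNode depends on the cluster set up by the earlier subtests:

	go test -v -timeout 60m ./test/integration \
	  -run 'TestMultiControlPlane/serial'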

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-533645 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-533645 -v=7 --alsologtostderr
E0723 14:22:11.819305   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:22:39.501918   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-533645 -v=7 --alsologtostderr: exit status 82 (2m1.832947608s)

                                                
                                                
-- stdout --
	* Stopping node "ha-533645-m04"  ...
	* Stopping node "ha-533645-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:21:27.491313   35929 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:21:27.491467   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:27.491480   35929 out.go:304] Setting ErrFile to fd 2...
	I0723 14:21:27.491487   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:21:27.491705   35929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:21:27.491941   35929 out.go:298] Setting JSON to false
	I0723 14:21:27.492039   35929 mustload.go:65] Loading cluster: ha-533645
	I0723 14:21:27.492399   35929 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:21:27.492484   35929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:21:27.492672   35929 mustload.go:65] Loading cluster: ha-533645
	I0723 14:21:27.492802   35929 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:21:27.492844   35929 stop.go:39] StopHost: ha-533645-m04
	I0723 14:21:27.493263   35929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:27.493318   35929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:27.507884   35929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0723 14:21:27.508369   35929 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:27.508912   35929 main.go:141] libmachine: Using API Version  1
	I0723 14:21:27.508939   35929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:27.509353   35929 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:27.512164   35929 out.go:177] * Stopping node "ha-533645-m04"  ...
	I0723 14:21:27.513878   35929 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 14:21:27.513928   35929 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:21:27.514139   35929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 14:21:27.514174   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:21:27.517320   35929 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:27.517818   35929 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:17:08 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:21:27.517859   35929 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:21:27.518036   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:21:27.518241   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:21:27.518401   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:21:27.518536   35929 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:21:27.598159   35929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 14:21:27.652885   35929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 14:21:27.707489   35929 main.go:141] libmachine: Stopping "ha-533645-m04"...
	I0723 14:21:27.707526   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:27.709030   35929 main.go:141] libmachine: (ha-533645-m04) Calling .Stop
	I0723 14:21:27.712695   35929 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 0/120
	I0723 14:21:28.852882   35929 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:21:28.854461   35929 main.go:141] libmachine: Machine "ha-533645-m04" was stopped.
	I0723 14:21:28.854477   35929 stop.go:75] duration metric: took 1.340602909s to stop
	I0723 14:21:28.854508   35929 stop.go:39] StopHost: ha-533645-m03
	I0723 14:21:28.854801   35929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:21:28.854862   35929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:21:28.869748   35929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I0723 14:21:28.870269   35929 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:21:28.870792   35929 main.go:141] libmachine: Using API Version  1
	I0723 14:21:28.870814   35929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:21:28.871132   35929 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:21:28.873262   35929 out.go:177] * Stopping node "ha-533645-m03"  ...
	I0723 14:21:28.874729   35929 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 14:21:28.874773   35929 main.go:141] libmachine: (ha-533645-m03) Calling .DriverName
	I0723 14:21:28.875042   35929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 14:21:28.875071   35929 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHHostname
	I0723 14:21:28.877744   35929 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:28.878202   35929 main.go:141] libmachine: (ha-533645-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:92:af", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:15:40 +0000 UTC Type:0 Mac:52:54:00:76:92:af Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-533645-m03 Clientid:01:52:54:00:76:92:af}
	I0723 14:21:28.878233   35929 main.go:141] libmachine: (ha-533645-m03) DBG | domain ha-533645-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:76:92:af in network mk-ha-533645
	I0723 14:21:28.878357   35929 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHPort
	I0723 14:21:28.878693   35929 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHKeyPath
	I0723 14:21:28.878862   35929 main.go:141] libmachine: (ha-533645-m03) Calling .GetSSHUsername
	I0723 14:21:28.879001   35929 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m03/id_rsa Username:docker}
	I0723 14:21:28.962937   35929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 14:21:29.016192   35929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 14:21:29.070362   35929 main.go:141] libmachine: Stopping "ha-533645-m03"...
	I0723 14:21:29.070406   35929 main.go:141] libmachine: (ha-533645-m03) Calling .GetState
	I0723 14:21:29.072147   35929 main.go:141] libmachine: (ha-533645-m03) Calling .Stop
	I0723 14:21:29.075800   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 0/120
	I0723 14:21:30.077584   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 1/120
	I0723 14:21:31.079342   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 2/120
	I0723 14:21:32.080835   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 3/120
	I0723 14:21:33.082522   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 4/120
	I0723 14:21:34.084519   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 5/120
	I0723 14:21:35.086445   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 6/120
	I0723 14:21:36.087812   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 7/120
	I0723 14:21:37.089670   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 8/120
	I0723 14:21:38.091319   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 9/120
	I0723 14:21:39.093791   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 10/120
	I0723 14:21:40.095407   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 11/120
	I0723 14:21:41.097114   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 12/120
	I0723 14:21:42.098895   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 13/120
	I0723 14:21:43.101558   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 14/120
	I0723 14:21:44.103731   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 15/120
	I0723 14:21:45.105230   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 16/120
	I0723 14:21:46.106630   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 17/120
	I0723 14:21:47.108362   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 18/120
	I0723 14:21:48.109999   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 19/120
	I0723 14:21:49.111800   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 20/120
	I0723 14:21:50.113487   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 21/120
	I0723 14:21:51.115128   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 22/120
	I0723 14:21:52.116542   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 23/120
	I0723 14:21:53.118334   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 24/120
	I0723 14:21:54.120614   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 25/120
	I0723 14:21:55.122964   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 26/120
	I0723 14:21:56.124960   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 27/120
	I0723 14:21:57.127071   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 28/120
	I0723 14:21:58.128645   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 29/120
	I0723 14:21:59.130687   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 30/120
	I0723 14:22:00.132611   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 31/120
	I0723 14:22:01.134155   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 32/120
	I0723 14:22:02.136016   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 33/120
	I0723 14:22:03.137417   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 34/120
	I0723 14:22:04.139362   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 35/120
	I0723 14:22:05.140676   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 36/120
	I0723 14:22:06.142105   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 37/120
	I0723 14:22:07.144326   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 38/120
	I0723 14:22:08.145638   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 39/120
	I0723 14:22:09.147455   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 40/120
	I0723 14:22:10.148755   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 41/120
	I0723 14:22:11.150276   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 42/120
	I0723 14:22:12.151596   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 43/120
	I0723 14:22:13.153336   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 44/120
	I0723 14:22:14.155361   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 45/120
	I0723 14:22:15.156866   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 46/120
	I0723 14:22:16.158283   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 47/120
	I0723 14:22:17.159750   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 48/120
	I0723 14:22:18.161273   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 49/120
	I0723 14:22:19.163248   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 50/120
	I0723 14:22:20.164674   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 51/120
	I0723 14:22:21.166033   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 52/120
	I0723 14:22:22.167594   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 53/120
	I0723 14:22:23.168889   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 54/120
	I0723 14:22:24.170892   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 55/120
	I0723 14:22:25.172585   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 56/120
	I0723 14:22:26.174194   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 57/120
	I0723 14:22:27.175713   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 58/120
	I0723 14:22:28.177119   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 59/120
	I0723 14:22:29.179029   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 60/120
	I0723 14:22:30.180450   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 61/120
	I0723 14:22:31.182036   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 62/120
	I0723 14:22:32.183516   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 63/120
	I0723 14:22:33.184811   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 64/120
	I0723 14:22:34.186326   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 65/120
	I0723 14:22:35.187753   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 66/120
	I0723 14:22:36.189210   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 67/120
	I0723 14:22:37.190790   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 68/120
	I0723 14:22:38.193018   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 69/120
	I0723 14:22:39.195084   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 70/120
	I0723 14:22:40.196579   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 71/120
	I0723 14:22:41.198000   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 72/120
	I0723 14:22:42.199433   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 73/120
	I0723 14:22:43.201034   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 74/120
	I0723 14:22:44.202871   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 75/120
	I0723 14:22:45.205190   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 76/120
	I0723 14:22:46.206661   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 77/120
	I0723 14:22:47.208860   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 78/120
	I0723 14:22:48.210451   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 79/120
	I0723 14:22:49.212191   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 80/120
	I0723 14:22:50.213592   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 81/120
	I0723 14:22:51.215030   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 82/120
	I0723 14:22:52.216626   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 83/120
	I0723 14:22:53.218020   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 84/120
	I0723 14:22:54.220074   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 85/120
	I0723 14:22:55.221519   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 86/120
	I0723 14:22:56.222846   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 87/120
	I0723 14:22:57.224283   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 88/120
	I0723 14:22:58.225663   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 89/120
	I0723 14:22:59.227543   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 90/120
	I0723 14:23:00.229164   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 91/120
	I0723 14:23:01.230628   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 92/120
	I0723 14:23:02.232080   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 93/120
	I0723 14:23:03.233513   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 94/120
	I0723 14:23:04.235216   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 95/120
	I0723 14:23:05.236610   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 96/120
	I0723 14:23:06.237921   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 97/120
	I0723 14:23:07.239623   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 98/120
	I0723 14:23:08.240859   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 99/120
	I0723 14:23:09.242491   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 100/120
	I0723 14:23:10.243639   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 101/120
	I0723 14:23:11.245195   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 102/120
	I0723 14:23:12.246340   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 103/120
	I0723 14:23:13.247946   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 104/120
	I0723 14:23:14.250027   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 105/120
	I0723 14:23:15.251542   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 106/120
	I0723 14:23:16.252750   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 107/120
	I0723 14:23:17.254225   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 108/120
	I0723 14:23:18.255822   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 109/120
	I0723 14:23:19.257787   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 110/120
	I0723 14:23:20.259353   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 111/120
	I0723 14:23:21.261002   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 112/120
	I0723 14:23:22.262507   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 113/120
	I0723 14:23:23.263662   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 114/120
	I0723 14:23:24.265385   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 115/120
	I0723 14:23:25.266876   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 116/120
	I0723 14:23:26.268127   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 117/120
	I0723 14:23:27.269493   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 118/120
	I0723 14:23:28.271257   35929 main.go:141] libmachine: (ha-533645-m03) Waiting for machine to stop 119/120
	I0723 14:23:29.272277   35929 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 14:23:29.272355   35929 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0723 14:23:29.274293   35929 out.go:177] 
	W0723 14:23:29.275771   35929 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0723 14:23:29.275791   35929 out.go:239] * 
	W0723 14:23:29.278068   35929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 14:23:29.282169   35929 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-533645 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-533645 --wait=true -v=7 --alsologtostderr
E0723 14:24:49.699328   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:26:12.746030   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:27:11.819135   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-533645 --wait=true -v=7 --alsologtostderr: (4m39.236377137s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-533645
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-533645 -n ha-533645
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-533645 logs -n 25: (1.760212059s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m04 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp testdata/cp-test.txt                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m04_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03:/home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m03 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-533645 node stop m02 -v=7                                                    | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-533645 node start m02 -v=7                                                   | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-533645 -v=7                                                          | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-533645 -v=7                                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-533645 --wait=true -v=7                                                   | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:23 UTC | 23 Jul 24 14:28 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-533645                                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:28 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:23:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:23:29.324808   36426 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:23:29.324928   36426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:23:29.324937   36426 out.go:304] Setting ErrFile to fd 2...
	I0723 14:23:29.324941   36426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:23:29.325124   36426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:23:29.325691   36426 out.go:298] Setting JSON to false
	I0723 14:23:29.326715   36426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3955,"bootTime":1721740654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:23:29.326779   36426 start.go:139] virtualization: kvm guest
	I0723 14:23:29.328827   36426 out.go:177] * [ha-533645] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:23:29.330412   36426 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:23:29.330472   36426 notify.go:220] Checking for updates...
	I0723 14:23:29.332869   36426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:23:29.334190   36426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:23:29.335786   36426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:23:29.337305   36426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:23:29.338672   36426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:23:29.340265   36426 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:23:29.340399   36426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:23:29.340874   36426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:23:29.340919   36426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:23:29.355845   36426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0723 14:23:29.356270   36426 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:23:29.356909   36426 main.go:141] libmachine: Using API Version  1
	I0723 14:23:29.356951   36426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:23:29.357262   36426 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:23:29.357443   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.391490   36426 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 14:23:29.392838   36426 start.go:297] selected driver: kvm2
	I0723 14:23:29.392855   36426 start.go:901] validating driver "kvm2" against &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:23:29.392994   36426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:23:29.393365   36426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:23:29.393446   36426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:23:29.407999   36426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:23:29.408642   36426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:23:29.408671   36426 cni.go:84] Creating CNI manager for ""
	I0723 14:23:29.408678   36426 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 14:23:29.408734   36426 start.go:340] cluster config:
	{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:23:29.408850   36426 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:23:29.410466   36426 out.go:177] * Starting "ha-533645" primary control-plane node in "ha-533645" cluster
	I0723 14:23:29.411642   36426 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:23:29.411677   36426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:23:29.411683   36426 cache.go:56] Caching tarball of preloaded images
	I0723 14:23:29.411748   36426 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:23:29.411758   36426 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:23:29.411882   36426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:23:29.412058   36426 start.go:360] acquireMachinesLock for ha-533645: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:23:29.412095   36426 start.go:364] duration metric: took 20.583µs to acquireMachinesLock for "ha-533645"
	I0723 14:23:29.412107   36426 start.go:96] Skipping create...Using existing machine configuration
	I0723 14:23:29.412114   36426 fix.go:54] fixHost starting: 
	I0723 14:23:29.412355   36426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:23:29.412385   36426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:23:29.427394   36426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0723 14:23:29.427881   36426 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:23:29.428390   36426 main.go:141] libmachine: Using API Version  1
	I0723 14:23:29.428411   36426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:23:29.428807   36426 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:23:29.429009   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.429161   36426 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:23:29.430771   36426 fix.go:112] recreateIfNeeded on ha-533645: state=Running err=<nil>
	W0723 14:23:29.430788   36426 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 14:23:29.432622   36426 out.go:177] * Updating the running kvm2 "ha-533645" VM ...
	I0723 14:23:29.433868   36426 machine.go:94] provisionDockerMachine start ...
	I0723 14:23:29.433889   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.434084   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.436634   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.437024   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.437055   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.437192   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.437367   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.437527   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.437672   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.437898   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.438063   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.438072   36426 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:23:29.555273   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:23:29.555297   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.555533   36426 buildroot.go:166] provisioning hostname "ha-533645"
	I0723 14:23:29.555549   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.555746   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.558041   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.558428   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.558447   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.558593   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.558840   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.559009   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.559150   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.559320   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.559477   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.559489   36426 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645 && echo "ha-533645" | sudo tee /etc/hostname
	I0723 14:23:29.692003   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:23:29.692032   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.694666   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.695010   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.695039   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.695199   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.695404   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.695617   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.695810   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.696039   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.696235   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.696257   36426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:23:29.811069   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:23:29.811106   36426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:23:29.811142   36426 buildroot.go:174] setting up certificates
	I0723 14:23:29.811154   36426 provision.go:84] configureAuth start
	I0723 14:23:29.811166   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.811433   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:23:29.814075   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.814470   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.814491   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.814637   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.817096   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.817551   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.817583   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.817648   36426 provision.go:143] copyHostCerts
	I0723 14:23:29.817692   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:23:29.817725   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:23:29.817734   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:23:29.817801   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:23:29.817872   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:23:29.817894   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:23:29.817901   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:23:29.817924   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:23:29.817961   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:23:29.817977   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:23:29.817983   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:23:29.818002   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:23:29.818044   36426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645 san=[127.0.0.1 192.168.39.103 ha-533645 localhost minikube]
	I0723 14:23:30.016580   36426 provision.go:177] copyRemoteCerts
	I0723 14:23:30.016645   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:23:30.016667   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:30.019382   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.019744   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:30.019786   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.019937   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:30.020137   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.020334   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:30.020480   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:23:30.111015   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:23:30.111100   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:23:30.139839   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:23:30.139932   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0723 14:23:30.166800   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:23:30.166865   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:23:30.192758   36426 provision.go:87] duration metric: took 381.588084ms to configureAuth
	I0723 14:23:30.192799   36426 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:23:30.193019   36426 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:23:30.193086   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:30.195533   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.195915   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:30.195933   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.196103   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:30.196284   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.196456   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.196577   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:30.196747   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:30.196903   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:30.196925   36426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:25:01.143328   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:25:01.143370   36426 machine.go:97] duration metric: took 1m31.709486705s to provisionDockerMachine
	I0723 14:25:01.143387   36426 start.go:293] postStartSetup for "ha-533645" (driver="kvm2")
	I0723 14:25:01.143403   36426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:25:01.143441   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.143867   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:25:01.143905   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.147223   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.147677   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.147705   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.147860   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.148056   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.148238   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.148399   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.244627   36426 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:25:01.248621   36426 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:25:01.248644   36426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:25:01.248713   36426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:25:01.248820   36426 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:25:01.248833   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:25:01.248963   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:25:01.258082   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:25:01.281241   36426 start.go:296] duration metric: took 137.841266ms for postStartSetup
	I0723 14:25:01.281285   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.281586   36426 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0723 14:25:01.281615   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.284055   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.284384   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.284413   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.284511   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.284740   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.284904   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.285046   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	W0723 14:25:01.373164   36426 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0723 14:25:01.373187   36426 fix.go:56] duration metric: took 1m31.961073496s for fixHost
	I0723 14:25:01.373208   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.375639   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.376031   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.376054   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.376211   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.376394   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.376552   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.376700   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.376877   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:25:01.377038   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:25:01.377048   36426 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:25:01.490921   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744701.447974144
	
	I0723 14:25:01.490943   36426 fix.go:216] guest clock: 1721744701.447974144
	I0723 14:25:01.490950   36426 fix.go:229] Guest: 2024-07-23 14:25:01.447974144 +0000 UTC Remote: 2024-07-23 14:25:01.373194435 +0000 UTC m=+92.081508893 (delta=74.779709ms)
	I0723 14:25:01.490982   36426 fix.go:200] guest clock delta is within tolerance: 74.779709ms
	I0723 14:25:01.490989   36426 start.go:83] releasing machines lock for "ha-533645", held for 1m32.078885482s
	I0723 14:25:01.491012   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.491345   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:25:01.493840   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.494205   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.494231   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.494412   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.494955   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.495133   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.495229   36426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:25:01.495272   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.495485   36426 ssh_runner.go:195] Run: cat /version.json
	I0723 14:25:01.495509   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.498052   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498423   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.498448   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498467   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498571   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.498740   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.498903   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.498925   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.498932   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.499082   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.499118   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.499263   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.499402   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.499573   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.580027   36426 ssh_runner.go:195] Run: systemctl --version
	I0723 14:25:01.627332   36426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:25:01.786837   36426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:25:01.794241   36426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:25:01.794315   36426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:25:01.803923   36426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 14:25:01.803955   36426 start.go:495] detecting cgroup driver to use...
	I0723 14:25:01.804020   36426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:25:01.819963   36426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:25:01.833556   36426 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:25:01.833618   36426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:25:01.846752   36426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:25:01.859580   36426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:25:02.010563   36426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:25:02.165654   36426 docker.go:233] disabling docker service ...
	I0723 14:25:02.165736   36426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:25:02.181906   36426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:25:02.195928   36426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:25:02.349290   36426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:25:02.491484   36426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:25:02.504880   36426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:25:02.522973   36426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:25:02.523026   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.532714   36426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:25:02.532771   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.542486   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.551880   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.561620   36426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:25:02.571353   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.581240   36426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.592343   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.602331   36426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:25:02.611220   36426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:25:02.619922   36426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:25:02.759081   36426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:25:07.147686   36426 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.388462954s)
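The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal sketch of double-checking the resulting settings on the node (file path and keys are taken from the log above; grep and systemctl are assumed to be available in the guest):

    # Confirm the pause image, cgroup driver and conmon cgroup that were just configured.
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # Confirm IPv4 forwarding was enabled for the pod network.
    cat /proc/sys/net/ipv4/ip_forward
    # crio should be active again after the restart.
    systemctl is-active crio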
	I0723 14:25:07.147835   36426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:25:07.147976   36426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:25:07.153781   36426 start.go:563] Will wait 60s for crictl version
	I0723 14:25:07.153839   36426 ssh_runner.go:195] Run: which crictl
	I0723 14:25:07.157383   36426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:25:07.192284   36426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
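crictl resolves the runtime through the endpoint written to /etc/crictl.yaml above. A small sketch of the equivalent query with the socket spelled out explicitly (socket path from the log; crictl assumed to be on PATH):

    # Same version information as above, with the endpoint made explicit.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Broader runtime status, including runtime and network conditions.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info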
	I0723 14:25:07.192366   36426 ssh_runner.go:195] Run: crio --version
	I0723 14:25:07.219211   36426 ssh_runner.go:195] Run: crio --version
	I0723 14:25:07.248598   36426 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:25:07.250007   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:25:07.252509   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:07.252925   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:07.252964   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:07.253132   36426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:25:07.257316   36426 kubeadm.go:883] updating cluster {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:25:07.257446   36426 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:25:07.257486   36426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:25:07.298904   36426 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:25:07.298925   36426 crio.go:433] Images already preloaded, skipping extraction
	I0723 14:25:07.298984   36426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:25:07.335546   36426 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:25:07.335571   36426 cache_images.go:84] Images are preloaded, skipping loading
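The preload check above parses the JSON output of crictl and concludes every required image is already present. A short sketch of inspecting the same list by hand (jq is an assumption here; it is not necessarily installed in the minikube guest):

    # List image repo tags known to CRI-O, one per line.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'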
	I0723 14:25:07.335581   36426 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0723 14:25:07.335685   36426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
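The unit snippet above is installed as a systemd drop-in (the scp step further down writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A small sketch of inspecting the merged unit once the daemon-reload below has run:

    # Show the base unit plus all drop-ins, including the ExecStart override above.
    systemctl cat kubelet
    # Confirm the override took effect.
    systemctl show kubelet -p ExecStart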
	I0723 14:25:07.335749   36426 ssh_runner.go:195] Run: crio config
	I0723 14:25:07.379434   36426 cni.go:84] Creating CNI manager for ""
	I0723 14:25:07.379452   36426 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 14:25:07.379460   36426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:25:07.379482   36426 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-533645 NodeName:ha-533645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:25:07.379607   36426 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-533645"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
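	The generated kubeadm config is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step below). A hedged sketch of sanity-checking such a file before it is used, assuming the kubeadm v1.30.x binary minikube stages under /var/lib/minikube/binaries is present:

    # Validate the file against kubeadm's API types (subcommand available in recent kubeadm releases).
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Print the upstream defaults for comparison.
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config print init-defaults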
	
	I0723 14:25:07.379625   36426 kube-vip.go:115] generating kube-vip config ...
	I0723 14:25:07.379663   36426 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:25:07.390474   36426 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:25:07.390586   36426 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
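The manifest above runs kube-vip as a static pod; the log below copies it to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet picks it up. A rough sketch of checking that the advertised VIP (192.168.39.254, from the config) is actually held and serving on this node:

    # The VIP should appear as a secondary address on eth0 on the current kube-vip leader.
    ip addr show dev eth0 | grep 192.168.39.254
    # The API server should answer on the VIP:8443 endpoint (anonymous access to /version is normally permitted).
    curl -k https://192.168.39.254:8443/version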
	I0723 14:25:07.390636   36426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:25:07.399481   36426 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:25:07.399542   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0723 14:25:07.408008   36426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0723 14:25:07.423785   36426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:25:07.438801   36426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0723 14:25:07.453897   36426 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:25:07.469433   36426 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
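The grep above checks whether /etc/hosts already maps the control-plane alias to the HA VIP. A hedged sketch of ensuring the mapping exists (IP and hostname come from the log; the exact command minikube itself uses may differ):

    # Add the control-plane alias only if it is not already present.
    grep -q 'control-plane.minikube.internal' /etc/hosts || \
      echo '192.168.39.254 control-plane.minikube.internal' | sudo tee -a /etc/hosts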
	I0723 14:25:07.474688   36426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:25:07.616984   36426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:25:07.631588   36426 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.103
	I0723 14:25:07.631608   36426 certs.go:194] generating shared ca certs ...
	I0723 14:25:07.631622   36426 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.631752   36426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:25:07.631798   36426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:25:07.631816   36426 certs.go:256] generating profile certs ...
	I0723 14:25:07.631888   36426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:25:07.631912   36426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5
	I0723 14:25:07.631927   36426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.127 192.168.39.254]
	I0723 14:25:07.791827   36426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 ...
	I0723 14:25:07.791856   36426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5: {Name:mk101f11a0cc0130e7f3750253f2ca35c44f1ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.792021   36426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5 ...
	I0723 14:25:07.792033   36426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5: {Name:mk5debc47b8cbb99d950d8a1de5e6b1878e14a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.792100   36426 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:25:07.792261   36426 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
	I0723 14:25:07.792394   36426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:25:07.792410   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:25:07.792421   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:25:07.792432   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:25:07.792443   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:25:07.792453   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:25:07.792466   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:25:07.792479   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:25:07.792491   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:25:07.792543   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:25:07.792570   36426 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:25:07.792579   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:25:07.792599   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:25:07.792622   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:25:07.792644   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:25:07.792679   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:25:07.792705   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:07.792718   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:25:07.792730   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:25:07.793256   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:25:07.818226   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:25:07.841752   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:25:07.863790   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:25:07.885744   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 14:25:07.907091   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:25:07.928670   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:25:07.950365   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:25:07.971959   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:25:07.992972   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:25:08.014464   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:25:08.036768   36426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:25:08.052096   36426 ssh_runner.go:195] Run: openssl version
	I0723 14:25:08.057509   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:25:08.067310   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.071350   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.071392   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.076630   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:25:08.085343   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:25:08.095103   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.099276   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.099323   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.104602   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:25:08.113311   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:25:08.123405   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.127644   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.127688   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.132892   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
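The test/ln commands above maintain OpenSSL's hashed-name symlinks in /etc/ssl/certs (e.g. b5213941.0, 51391683.0), which is how the library locates trusted CAs by subject hash. A generic sketch of creating such a link for any CA file, assuming write access to /etc/ssl/certs:

    CA=/usr/share/ca-certificates/minikubeCA.pem
    # openssl x509 -hash prints the subject-name hash that is used as the symlink name.
    HASH=$(openssl x509 -hash -noout -in "$CA")
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"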
	I0723 14:25:08.141700   36426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:25:08.145844   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 14:25:08.151149   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 14:25:08.156218   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 14:25:08.161215   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 14:25:08.166555   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 14:25:08.171611   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
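Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. A small sketch of the same check with a human-readable expiry date, using one of the paths from the log:

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    # Print the notAfter date for the certificate.
    sudo openssl x509 -noout -enddate -in "$CERT"
    # Exit status 0 means the cert is still valid for at least one more day.
    sudo openssl x509 -noout -checkend 86400 -in "$CERT" && echo "valid for >24h" || echo "expiring soon"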
	I0723 14:25:08.176774   36426 kubeadm.go:392] StartCluster: {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:25:08.176926   36426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:25:08.176965   36426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:25:08.211100   36426 cri.go:89] found id: "cc130d1c92bae3c9e9791f1835f140213686af10adf6434010b55ac85f7293fe"
	I0723 14:25:08.211120   36426 cri.go:89] found id: "946446943bfa5a933cb67d27b02de7fccbd3772337ca82479985a55d61331803"
	I0723 14:25:08.211124   36426 cri.go:89] found id: "1db081ee945c36cc2ca4087ffb7e3e16ab8e74ae4d142c959677bde60737e5cd"
	I0723 14:25:08.211128   36426 cri.go:89] found id: "875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219"
	I0723 14:25:08.211130   36426 cri.go:89] found id: "c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46"
	I0723 14:25:08.211133   36426 cri.go:89] found id: "ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f"
	I0723 14:25:08.211136   36426 cri.go:89] found id: "204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493"
	I0723 14:25:08.211138   36426 cri.go:89] found id: "1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e"
	I0723 14:25:08.211140   36426 cri.go:89] found id: "a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71"
	I0723 14:25:08.211145   36426 cri.go:89] found id: "76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c"
	I0723 14:25:08.211150   36426 cri.go:89] found id: "081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e"
	I0723 14:25:08.211153   36426 cri.go:89] found id: "7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090"
	I0723 14:25:08.211155   36426 cri.go:89] found id: "e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3"
	I0723 14:25:08.211158   36426 cri.go:89] found id: ""
	I0723 14:25:08.211193   36426 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.241416501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744889241387539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8f88334-32bf-4b52-8fb0-1425b78f32eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.242759762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad517746-c1b2-4b9b-8637-280e81da14a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.242843698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad517746-c1b2-4b9b-8637-280e81da14a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.243470055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad517746-c1b2-4b9b-8637-280e81da14a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.286544373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a96c3be0-3c08-429c-8cac-3427e7428d4b name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.286659112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a96c3be0-3c08-429c-8cac-3427e7428d4b name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.288244395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94a9f616-5694-484f-8723-79a4e2f4df29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.288761924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744889288732188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94a9f616-5694-484f-8723-79a4e2f4df29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.289477169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdeb51e0-259a-418e-bfac-6914f3144541 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.289544355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdeb51e0-259a-418e-bfac-6914f3144541 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.290023312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdeb51e0-259a-418e-bfac-6914f3144541 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.338420875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59a43f70-0834-4412-af73-ba009de141d3 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.338531030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59a43f70-0834-4412-af73-ba009de141d3 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.339989858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b7ede74-6fd7-4ebd-9732-f40eeb8644ca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.340825872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744889340788438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b7ede74-6fd7-4ebd-9732-f40eeb8644ca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.341634164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe6b1822-8eca-4487-85de-1d8c221c9f71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.341744053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe6b1822-8eca-4487-85de-1d8c221c9f71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.342303476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe6b1822-8eca-4487-85de-1d8c221c9f71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.383625008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3382d04-6d5c-4fc5-aeae-081e142519a4 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.383732043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3382d04-6d5c-4fc5-aeae-081e142519a4 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.384623492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7f98455-1581-4c2f-9711-30d6f04e92c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.385701008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721744889385663432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7f98455-1581-4c2f-9711-30d6f04e92c7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.386245182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be338f1f-2aa5-4a25-848a-0b04e2f16acd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.386321316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be338f1f-2aa5-4a25-848a-0b04e2f16acd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:28:09 ha-533645 crio[3705]: time="2024-07-23 14:28:09.386887688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be338f1f-2aa5-4a25-848a-0b04e2f16acd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fd52126455865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   a40b6778e0792       storage-provisioner
	a56fd142b0b4e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   e0bca7366951a       kube-apiserver-ha-533645
	95b833ba6bc09       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   24d42cb054406       kube-controller-manager-ha-533645
	7d62bf4276e71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   a40b6778e0792       storage-provisioner
	3ca3498bbe375       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   e9e3414356d26       busybox-fc5497c4f-cd87c
	0b777505a2c50       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   6a75d2cb8f8e7       kube-vip-ha-533645
	46676ad486f94       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   b9b9a76367d45       kube-proxy-9wh4w
	8b31e93d3dd22       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   9549aad58e1e2       kindnet-99vkr
	e4daeb85c3ac6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   3317a972a9fb9       coredns-7db6d8ff4d-s6xzz
	1128fcbd5591d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   b21bb2d4573f4       coredns-7db6d8ff4d-nrvbf
	fa91958d57171       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   a7166d5441750       kube-scheduler-ha-533645
	bb05f37daa7f4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      3 minutes ago        Exited              kube-controller-manager   1                   24d42cb054406       kube-controller-manager-ha-533645
	2c6f6682e15ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago        Exited              kube-apiserver            2                   e0bca7366951a       kube-apiserver-ha-533645
	063d43d9b55be       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   7e5a095202b51       etcd-ha-533645
	01ba0f9525e42       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   8e48b2467dce8       busybox-fc5497c4f-cd87c
	875e4306cadef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   67e32a92d8db3       coredns-7db6d8ff4d-nrvbf
	c272094e83046       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   a7feedf1d20d0       coredns-7db6d8ff4d-s6xzz
	204bd8ec5a070       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    14 minutes ago       Exited              kindnet-cni               0                   08c39cde805a7       kindnet-99vkr
	1d5b9787b76de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   8cb09524a9c81       kube-proxy-9wh4w
	081aaa8c6121c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   5d23d91d7b6c3       etcd-ha-533645
	7972ddd5dc32d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   17bfeff63e984       kube-scheduler-ha-533645
	
	
	==> coredns [1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d] <==
	Trace[174020751]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:28.975)
	Trace[174020751]: [10.001772994s] [10.001772994s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51674->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51674->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219] <==
	[INFO] 10.244.0.4:49583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187618s
	[INFO] 10.244.0.4:47929 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087678s
	[INFO] 10.244.2.2:38089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189381s
	[INFO] 10.244.2.2:42424 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002105089s
	[INFO] 10.244.2.2:44423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066747s
	[INFO] 10.244.1.2:32850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770779s
	[INFO] 10.244.1.2:53620 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074588s
	[INFO] 10.244.1.2:33169 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009318s
	[INFO] 10.244.0.4:47876 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009475s
	[INFO] 10.244.2.2:42045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092251s
	[INFO] 10.244.2.2:58530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137054s
	[INFO] 10.244.1.2:36698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167251s
	[INFO] 10.244.1.2:56144 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082378s
	[INFO] 10.244.1.2:37800 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138485s
	[INFO] 10.244.0.4:35800 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198717s
	[INFO] 10.244.0.4:55540 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113741s
	[INFO] 10.244.0.4:40041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000256677s
	[INFO] 10.244.1.2:51609 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132031s
	[INFO] 10.244.1.2:56610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023971s
	[INFO] 10.244.1.2:42525 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084914s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46] <==
	[INFO] 10.244.2.2:36170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001374503s
	[INFO] 10.244.2.2:32919 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148684s
	[INFO] 10.244.2.2:33222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130497s
	[INFO] 10.244.1.2:41720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132072s
	[INFO] 10.244.1.2:46039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136478s
	[INFO] 10.244.1.2:42265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246596s
	[INFO] 10.244.1.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106745s
	[INFO] 10.244.1.2:42065 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173598s
	[INFO] 10.244.0.4:49694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097989s
	[INFO] 10.244.0.4:55332 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105679s
	[INFO] 10.244.0.4:55778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057634s
	[INFO] 10.244.2.2:46643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151446s
	[INFO] 10.244.2.2:47656 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125295s
	[INFO] 10.244.1.2:33099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116864s
	[INFO] 10.244.0.4:43829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233901s
	[INFO] 10.244.2.2:39898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180683s
	[INFO] 10.244.2.2:53185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148942s
	[INFO] 10.244.2.2:36301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319769s
	[INFO] 10.244.2.2:54739 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011416s
	[INFO] 10.244.1.2:40740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148117s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4daeb85c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-533645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:28:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:25:53 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:25:53 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:25:53 +0000   Tue, 23 Jul 2024 14:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:25:53 +0000   Tue, 23 Jul 2024 14:14:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-533645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 016f247620dd4139a26ce62f3129dde1
	  System UUID:                016f2476-20dd-4139-a26c-e62f3129dde1
	  Boot ID:                    218264a1-e12e-486d-a0c2-4ec59bc9cd30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cd87c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-nrvbf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-s6xzz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-533645                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-99vkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-533645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-533645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9wh4w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-533645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-533645                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-533645 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Warning  ContainerGCFailed        3m33s (x2 over 4m33s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	
	
	Name:               ha-533645-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:15:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:27:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-533645-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 024bddfd48eb471b960e0dab2d3cd45b
	  System UUID:                024bddfd-48eb-471b-960e-0dab2d3cd45b
	  Boot ID:                    f5b66f61-31e7-4590-a690-1a9245df56a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tlvlp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-533645-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-95sfh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-533645-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-533645-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-p25cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-533645-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-533645-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  NodeNotReady             9m31s                  node-controller  Node ha-533645-m02 status is now: NodeNotReady
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s (x7 over 2m40s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	
	
	Name:               ha-533645-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_16_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:28:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:27:42 +0000   Tue, 23 Jul 2024 14:27:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:27:42 +0000   Tue, 23 Jul 2024 14:27:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:27:42 +0000   Tue, 23 Jul 2024 14:27:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:27:42 +0000   Tue, 23 Jul 2024 14:27:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-533645-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58ea8f3065de44aea0aac5ffb591660d
	  System UUID:                58ea8f30-65de-44ae-a0aa-c5ffb591660d
	  Boot ID:                    41f98921-9c85-44d4-bac5-44e1443cf4cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq2ww                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-533645-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-99qsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-533645-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-533645-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xsk2w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-533645-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-533645-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-533645-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-533645-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s (x2 over 57s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 57s)  kubelet          Node ha-533645-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 57s)  kubelet          Node ha-533645-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s                kubelet          Node ha-533645-m03 has been rebooted, boot id: 41f98921-9c85-44d4-bac5-44e1443cf4cc
	  Normal   NodeReady                57s                kubelet          Node ha-533645-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-533645-m03 event: Registered Node ha-533645-m03 in Controller
	
	
	Name:               ha-533645-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_17_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:17:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:28:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:28:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-533645-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d58ceb89e2492c9f4ada3b3365c263
	  System UUID:                c6d58ceb-89e2-492c-9f4a-da3b3365c263
	  Boot ID:                    7c9b0a32-693f-4200-8920-455f96c741ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f4tkn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-nz528    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-533645-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-533645-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-533645-m04 has been rebooted, boot id: 7c9b0a32-693f-4200-8920-455f96c741ac
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-533645-m04 status is now: NodeReady
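
The node descriptions above are the output of kubectl describe nodes as captured by the log collector. To re-check node readiness and the event history by hand against the same cluster, something along these lines should work (the kubeconfig context name is assumed to match the profile name ha-533645; that name is not shown in the capture itself):

  # list status, roles and internal IPs of all four nodes
  kubectl --context ha-533645 get nodes -o wide

  # re-dump conditions, capacity and recent events for one node
  kubectl --context ha-533645 describe node ha-533645-m04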
	
	
	==> dmesg <==
	[  +7.424464] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.065789] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058371] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.157255] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.139843] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.253665] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.906302] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.745369] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.058504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.271647] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.077951] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.844081] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.054308] kauditd_printk_skb: 34 callbacks suppressed
	[Jul23 14:15] kauditd_printk_skb: 24 callbacks suppressed
	[Jul23 14:25] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.151028] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.197525] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.146196] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.265504] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +4.858652] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.082505] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.370996] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.054970] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.413995] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.846041] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2] <==
	{"level":"warn","ts":"2024-07-23T14:27:06.503106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"836b637e1db3e16e","from":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-23T14:27:06.555249Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:06.555376Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:08.789258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:08.789453Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:10.558687Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:10.558777Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:13.790084Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:13.790226Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:14.561043Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:14.56123Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2ed9c9959a67d1c2","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-23T14:27:17.105968Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:27:17.106212Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:27:17.107605Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:27:17.143473Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"836b637e1db3e16e","to":"2ed9c9959a67d1c2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-23T14:27:17.14363Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:27:17.14539Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"836b637e1db3e16e","to":"2ed9c9959a67d1c2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-23T14:27:17.145474Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:27:18.790909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:18.79101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-23T14:27:26.736833Z","caller":"traceutil/trace.go:171","msg":"trace[1708522923] transaction","detail":"{read_only:false; response_revision:2550; number_of_response:1; }","duration":"141.808451ms","start":"2024-07-23T14:27:26.594762Z","end":"2024-07-23T14:27:26.73657Z","steps":["trace[1708522923] 'process raft request'  (duration: 141.727567ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:27:28.773379Z","caller":"traceutil/trace.go:171","msg":"trace[2043252536] linearizableReadLoop","detail":"{readStateIndex:2987; appliedIndex:2987; }","duration":"118.562988ms","start":"2024-07-23T14:27:28.654777Z","end":"2024-07-23T14:27:28.77334Z","steps":["trace[2043252536] 'read index received'  (duration: 118.557396ms)","trace[2043252536] 'applied index is now lower than readState.Index'  (duration: 4.362µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:27:28.773617Z","caller":"traceutil/trace.go:171","msg":"trace[1923459605] transaction","detail":"{read_only:false; response_revision:2561; number_of_response:1; }","duration":"159.941698ms","start":"2024-07-23T14:27:28.613659Z","end":"2024-07-23T14:27:28.773601Z","steps":["trace[1923459605] 'process raft request'  (duration: 159.818321ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:27:28.774012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.112974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-533645-m03\" ","response":"range_response_count:1 size:5803"}
	{"level":"info","ts":"2024-07-23T14:27:28.774197Z","caller":"traceutil/trace.go:171","msg":"trace[321309753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-533645-m03; range_end:; response_count:1; response_revision:2561; }","duration":"119.42965ms","start":"2024-07-23T14:27:28.654749Z","end":"2024-07-23T14:27:28.774179Z","steps":["trace[321309753] 'agreement among raft nodes before linearized reading'  (duration: 119.018481ms)"],"step_count":1}
	
	
	==> etcd [081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e] <==
	{"level":"info","ts":"2024-07-23T14:23:30.339588Z","caller":"traceutil/trace.go:171","msg":"trace[1562796131] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; }","duration":"7.740614453s","start":"2024-07-23T14:23:22.598967Z","end":"2024-07-23T14:23:30.339582Z","steps":["trace[1562796131] 'agreement among raft nodes before linearized reading'  (duration: 7.740572136s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:23:30.339611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:23:22.598964Z","time spent":"7.740640387s","remote":"127.0.0.1:34750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/07/23 14:23:30 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-23T14:23:30.335111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:23:29.600387Z","time spent":"734.720796ms","remote":"127.0.0.1:35178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/07/23 14:23:30 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-23T14:23:30.404539Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.103:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:23:30.40459Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.103:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:23:30.404651Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"836b637e1db3e16e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404847Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404908Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404989Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405184Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405243Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405282Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405293Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405298Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405307Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405342Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405444Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405592Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405642Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.408544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.103:2380"}
	{"level":"info","ts":"2024-07-23T14:23:30.408664Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.103:2380"}
	{"level":"info","ts":"2024-07-23T14:23:30.408687Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-533645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.103:2380"],"advertise-client-urls":["https://192.168.39.103:2379"]}
	
	
	==> kernel <==
	 14:28:10 up 15 min,  0 users,  load average: 0.12, 0.33, 0.25
	Linux ha-533645 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493] <==
	I0723 14:22:55.722969       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:05.723271       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:05.723336       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:05.723570       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:05.723595       1 main.go:299] handling current node
	I0723 14:23:05.723610       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:05.723615       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:05.723678       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:05.723682       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:15.722832       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:15.722883       1 main.go:299] handling current node
	I0723 14:23:15.722927       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:15.722940       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:15.723227       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:15.723253       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:15.723359       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:15.723380       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:25.727722       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:25.727851       1 main.go:299] handling current node
	I0723 14:23:25.727884       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:25.727906       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:25.728257       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:25.728342       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:25.728449       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:25.728474       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb] <==
	I0723 14:27:38.140314       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:27:48.134740       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:27:48.134885       1 main.go:299] handling current node
	I0723 14:27:48.134922       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:27:48.134944       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:27:48.135311       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:27:48.135359       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:27:48.135496       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:27:48.135517       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:27:58.132449       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:27:58.132548       1 main.go:299] handling current node
	I0723 14:27:58.132606       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:27:58.132613       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:27:58.133082       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:27:58.133105       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:27:58.133297       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:27:58.133316       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:28:08.131088       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:28:08.131286       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:28:08.131496       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:28:08.131558       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:28:08.131724       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:28:08.131777       1 main.go:299] handling current node
	I0723 14:28:08.131821       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:28:08.131850       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f] <==
	I0723 14:25:08.714551       1 options.go:221] external host was not specified, using 192.168.39.103
	I0723 14:25:08.716290       1 server.go:148] Version: v1.30.3
	I0723 14:25:08.716334       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:25:09.157571       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0723 14:25:09.164435       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:25:09.179685       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0723 14:25:09.186572       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0723 14:25:09.186907       1 instance.go:299] Using reconciler: lease
	W0723 14:25:29.155607       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0723 14:25:29.156932       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0723 14:25:29.187968       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0723 14:25:29.188105       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491] <==
	I0723 14:25:50.637441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0723 14:25:50.637497       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 14:25:50.637595       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 14:25:50.714195       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 14:25:50.714263       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 14:25:50.714204       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 14:25:50.714245       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 14:25:50.717096       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:25:50.717966       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 14:25:50.722880       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0723 14:25:50.729951       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.182]
	I0723 14:25:50.738065       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 14:25:50.738094       1 aggregator.go:165] initial CRD sync complete...
	I0723 14:25:50.738113       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 14:25:50.738152       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 14:25:50.738157       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:25:50.757955       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 14:25:50.761187       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:25:50.761223       1 policy_source.go:224] refreshing policies
	I0723 14:25:50.796329       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:25:50.832093       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:25:50.840099       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0723 14:25:50.843751       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0723 14:25:51.622016       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0723 14:25:52.160871       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.127 192.168.39.182]
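
The first kube-apiserver instance above ([2c6f66...]) gave up after it could not reach etcd on 127.0.0.1:2379 within the deadline ("Error creating leases: error creating storage factory: context deadline exceeded"); the second ([a56fd1...]) came up once etcd was reachable and re-synced its caches. A quick probe of the running apiserver and its etcd-backed health check, sketched against the same assumed context name:

  # overall readiness with per-check detail
  kubectl --context ha-533645 get --raw '/readyz?verbose'

  # just the etcd health check
  kubectl --context ha-533645 get --raw '/livez/etcd'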
	
	
	==> kube-controller-manager [95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f] <==
	I0723 14:26:03.897248       1 shared_informer.go:320] Caches are synced for PV protection
	I0723 14:26:03.950961       1 shared_informer.go:320] Caches are synced for daemon sets
	I0723 14:26:04.016325       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:26:04.035886       1 shared_informer.go:320] Caches are synced for disruption
	I0723 14:26:04.035916       1 shared_informer.go:320] Caches are synced for stateful set
	I0723 14:26:04.040610       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:26:04.492711       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:26:04.534796       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:26:04.534900       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 14:26:06.398397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.989321ms"
	I0723 14:26:06.398601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.192µs"
	I0723 14:26:16.170798       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h9krx\": the object has been modified; please apply your changes to the latest version and try again"
	I0723 14:26:16.171379       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c6785928-a46e-422c-892f-7d7089b74c17", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h9krx": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:26:16.188466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.502031ms"
	I0723 14:26:16.188656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.035µs"
	I0723 14:26:38.843070       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h9krx\": the object has been modified; please apply your changes to the latest version and try again"
	I0723 14:26:38.844023       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c6785928-a46e-422c-892f-7d7089b74c17", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h9krx": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:26:38.873653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.358164ms"
	I0723 14:26:38.873841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.993µs"
	I0723 14:26:41.165351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.812841ms"
	I0723 14:26:41.165809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.956µs"
	I0723 14:27:13.234432       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.177µs"
	I0723 14:27:32.511996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.668912ms"
	I0723 14:27:32.512251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.049µs"
	I0723 14:28:01.544633       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-533645-m04"
	
	
	==> kube-controller-manager [bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54] <==
	I0723 14:25:09.201580       1 serving.go:380] Generated self-signed cert in-memory
	I0723 14:25:09.682342       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0723 14:25:09.682382       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:25:09.683970       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 14:25:09.684023       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0723 14:25:09.684318       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 14:25:09.684565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0723 14:25:30.195156       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.103:8443/healthz\": dial tcp 192.168.39.103:8443: connect: connection refused"
	
	
	==> kube-proxy [1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e] <==
	E0723 14:22:23.865675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.937077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.937683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.938655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.938738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.938841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.938871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.080842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.080900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.080970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.081003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.081059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.081088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:42.298295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:42.298430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:45.370298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:45.370359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:48.441994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:48.442178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:03.801775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:03.801895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:03.802041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:03.802150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:09.945175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:09.945851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e] <==
	I0723 14:25:19.100009       1 server_linux.go:69] "Using iptables proxy"
	E0723 14:25:22.041571       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:25.113680       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:28.185512       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:34.329710       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:43.544958       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0723 14:26:00.693936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0723 14:26:00.751861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:26:00.751961       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:26:00.752004       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:26:00.756930       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:26:00.758599       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:26:00.758979       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:26:00.760574       1 config.go:192] "Starting service config controller"
	I0723 14:26:00.760654       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:26:00.760713       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:26:00.760731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:26:00.761478       1 config.go:319] "Starting node config controller"
	I0723 14:26:00.777252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:26:00.861589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:26:00.862795       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:26:00.878928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090] <==
	W0723 14:23:23.045772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:23:23.045869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:23:23.112473       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:23:23.112549       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:23:23.119687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:23:23.119783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:23:23.231252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:23:23.231339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:23:23.388384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:23:23.388429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:23:23.390500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:23:23.390572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:23:24.101389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:23:24.101519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:23:24.289841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:24.289929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:24.757516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:24.757581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.235301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:25.235384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.364313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:25.364358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.821567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:23:25.821620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:23:30.301802       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db] <==
	W0723 14:25:45.113993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:45.114060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:46.077859       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:46.077958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:46.140863       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:46.140964       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:46.574697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:46.574775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:47.092462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:47.092571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:47.521751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:47.521862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.267820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.267904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.457289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.457425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.621000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.621213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:50.662618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:25:50.662718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:25:50.662805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:25:50.662842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:25:50.662930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:25:50.662962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:26:07.102543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:25:47 ha-533645 kubelet[1366]: E0723 14:25:47.230537    1366 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52ab05ba-6dfc-4cc6-9085-8632f5cd7a66)\"" pod="kube-system/storage-provisioner" podUID="52ab05ba-6dfc-4cc6-9085-8632f5cd7a66"
	Jul 23 14:25:48 ha-533645 kubelet[1366]: I0723 14:25:48.789036    1366 scope.go:117] "RemoveContainer" containerID="2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f"
	Jul 23 14:25:49 ha-533645 kubelet[1366]: E0723 14:25:49.688475    1366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-533645?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 23 14:25:49 ha-533645 kubelet[1366]: E0723 14:25:49.688461    1366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-533645.17e4dd7f0e54955f  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-533645,UID:5693e50c5ce4a113bda653dc5ed85d89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-533645,},FirstTimestamp:2024-07-23 14:21:35.333381471 +0000 UTC m=+478.663722291,LastTimestamp:2024-07-23 14:21:35.333381471 +0000 UTC m=+478.663722291,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-533645,}"
	Jul 23 14:25:49 ha-533645 kubelet[1366]: I0723 14:25:49.688582    1366 status_manager.go:853] "Failed to get status for pod" podUID="6de7f3c8e278c087425628d1b79c1d22" pod="kube-system/kube-scheduler-ha-533645" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 23 14:25:52 ha-533645 kubelet[1366]: I0723 14:25:52.760546    1366 status_manager.go:853] "Failed to get status for pod" podUID="d9eb4982-e145-42cf-9a84-6013d7cdd3aa" pod="kube-system/kube-proxy-9wh4w" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wh4w\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 23 14:25:59 ha-533645 kubelet[1366]: I0723 14:25:59.788661    1366 scope.go:117] "RemoveContainer" containerID="7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2"
	Jul 23 14:25:59 ha-533645 kubelet[1366]: E0723 14:25:59.788864    1366 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52ab05ba-6dfc-4cc6-9085-8632f5cd7a66)\"" pod="kube-system/storage-provisioner" podUID="52ab05ba-6dfc-4cc6-9085-8632f5cd7a66"
	Jul 23 14:26:13 ha-533645 kubelet[1366]: I0723 14:26:13.789003    1366 scope.go:117] "RemoveContainer" containerID="7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2"
	Jul 23 14:26:13 ha-533645 kubelet[1366]: E0723 14:26:13.789272    1366 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52ab05ba-6dfc-4cc6-9085-8632f5cd7a66)\"" pod="kube-system/storage-provisioner" podUID="52ab05ba-6dfc-4cc6-9085-8632f5cd7a66"
	Jul 23 14:26:25 ha-533645 kubelet[1366]: I0723 14:26:25.862678    1366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-cd87c" podStartSLOduration=577.223812951 podStartE2EDuration="9m39.862613855s" podCreationTimestamp="2024-07-23 14:16:46 +0000 UTC" firstStartedPulling="2024-07-23 14:16:47.628258236 +0000 UTC m=+190.958599034" lastFinishedPulling="2024-07-23 14:16:50.267059139 +0000 UTC m=+193.597399938" observedRunningTime="2024-07-23 14:16:50.550678988 +0000 UTC m=+193.881019807" watchObservedRunningTime="2024-07-23 14:26:25.862613855 +0000 UTC m=+769.192954675"
	Jul 23 14:26:28 ha-533645 kubelet[1366]: I0723 14:26:28.788992    1366 scope.go:117] "RemoveContainer" containerID="7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2"
	Jul 23 14:26:36 ha-533645 kubelet[1366]: E0723 14:26:36.827771    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:26:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:26:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:26:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:26:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:26:53 ha-533645 kubelet[1366]: I0723 14:26:53.788949    1366 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-533645" podUID="f21f8827-c6f7-4767-b7f5-f23c385e93ae"
	Jul 23 14:26:53 ha-533645 kubelet[1366]: I0723 14:26:53.806103    1366 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-533645"
	Jul 23 14:26:56 ha-533645 kubelet[1366]: I0723 14:26:56.806939    1366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-533645" podStartSLOduration=3.80691302 podStartE2EDuration="3.80691302s" podCreationTimestamp="2024-07-23 14:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-23 14:26:56.805641776 +0000 UTC m=+800.135982595" watchObservedRunningTime="2024-07-23 14:26:56.80691302 +0000 UTC m=+800.137253839"
	Jul 23 14:27:36 ha-533645 kubelet[1366]: E0723 14:27:36.827248    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:27:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:27:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:27:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:27:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 14:28:08.923894   37872 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-533645 -n ha-533645
helpers_test.go:261: (dbg) Run:  kubectl --context ha-533645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.59s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 stop -v=7 --alsologtostderr
E0723 14:29:49.699807   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 stop -v=7 --alsologtostderr: exit status 82 (2m0.477728016s)

                                                
                                                
-- stdout --
	* Stopping node "ha-533645-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:28:28.576174   38284 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:28:28.576476   38284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:28:28.576487   38284 out.go:304] Setting ErrFile to fd 2...
	I0723 14:28:28.576492   38284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:28:28.576669   38284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:28:28.576905   38284 out.go:298] Setting JSON to false
	I0723 14:28:28.576995   38284 mustload.go:65] Loading cluster: ha-533645
	I0723 14:28:28.577356   38284 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:28.577445   38284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:28:28.577619   38284 mustload.go:65] Loading cluster: ha-533645
	I0723 14:28:28.577750   38284 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:28.577785   38284 stop.go:39] StopHost: ha-533645-m04
	I0723 14:28:28.578151   38284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:28:28.578197   38284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:28:28.592719   38284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0723 14:28:28.593150   38284 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:28:28.593655   38284 main.go:141] libmachine: Using API Version  1
	I0723 14:28:28.593678   38284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:28:28.594062   38284 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:28:28.596538   38284 out.go:177] * Stopping node "ha-533645-m04"  ...
	I0723 14:28:28.598013   38284 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 14:28:28.598036   38284 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:28:28.598286   38284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 14:28:28.598309   38284 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:28:28.601281   38284 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:28:28.601759   38284 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:27:55 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:28:28.601784   38284 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:28:28.601949   38284 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:28:28.602112   38284 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:28:28.602271   38284 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:28:28.602428   38284 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	I0723 14:28:28.688118   38284 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 14:28:28.739871   38284 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 14:28:28.792058   38284 main.go:141] libmachine: Stopping "ha-533645-m04"...
	I0723 14:28:28.792093   38284 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:28:28.793985   38284 main.go:141] libmachine: (ha-533645-m04) Calling .Stop
	I0723 14:28:28.798008   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 0/120
	I0723 14:28:29.799793   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 1/120
	I0723 14:28:30.801118   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 2/120
	I0723 14:28:31.802798   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 3/120
	I0723 14:28:32.804118   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 4/120
	I0723 14:28:33.806274   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 5/120
	I0723 14:28:34.807716   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 6/120
	I0723 14:28:35.809275   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 7/120
	I0723 14:28:36.811015   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 8/120
	I0723 14:28:37.812320   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 9/120
	I0723 14:28:38.814234   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 10/120
	I0723 14:28:39.815997   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 11/120
	I0723 14:28:40.817848   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 12/120
	I0723 14:28:41.819299   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 13/120
	I0723 14:28:42.820515   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 14/120
	I0723 14:28:43.822320   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 15/120
	I0723 14:28:44.823668   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 16/120
	I0723 14:28:45.825154   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 17/120
	I0723 14:28:46.826606   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 18/120
	I0723 14:28:47.828853   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 19/120
	I0723 14:28:48.830701   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 20/120
	I0723 14:28:49.832090   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 21/120
	I0723 14:28:50.833828   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 22/120
	I0723 14:28:51.835214   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 23/120
	I0723 14:28:52.837098   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 24/120
	I0723 14:28:53.838587   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 25/120
	I0723 14:28:54.841251   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 26/120
	I0723 14:28:55.842810   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 27/120
	I0723 14:28:56.845126   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 28/120
	I0723 14:28:57.846706   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 29/120
	I0723 14:28:58.848882   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 30/120
	I0723 14:28:59.850160   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 31/120
	I0723 14:29:00.851365   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 32/120
	I0723 14:29:01.853067   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 33/120
	I0723 14:29:02.855125   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 34/120
	I0723 14:29:03.857407   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 35/120
	I0723 14:29:04.858885   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 36/120
	I0723 14:29:05.861174   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 37/120
	I0723 14:29:06.862615   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 38/120
	I0723 14:29:07.864975   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 39/120
	I0723 14:29:08.866844   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 40/120
	I0723 14:29:09.868490   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 41/120
	I0723 14:29:10.869865   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 42/120
	I0723 14:29:11.872050   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 43/120
	I0723 14:29:12.873705   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 44/120
	I0723 14:29:13.875874   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 45/120
	I0723 14:29:14.877343   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 46/120
	I0723 14:29:15.879416   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 47/120
	I0723 14:29:16.881623   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 48/120
	I0723 14:29:17.883530   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 49/120
	I0723 14:29:18.886028   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 50/120
	I0723 14:29:19.887903   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 51/120
	I0723 14:29:20.889422   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 52/120
	I0723 14:29:21.890977   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 53/120
	I0723 14:29:22.893100   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 54/120
	I0723 14:29:23.895139   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 55/120
	I0723 14:29:24.896615   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 56/120
	I0723 14:29:25.898593   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 57/120
	I0723 14:29:26.899975   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 58/120
	I0723 14:29:27.901259   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 59/120
	I0723 14:29:28.903164   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 60/120
	I0723 14:29:29.905246   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 61/120
	I0723 14:29:30.906829   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 62/120
	I0723 14:29:31.908345   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 63/120
	I0723 14:29:32.909898   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 64/120
	I0723 14:29:33.912071   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 65/120
	I0723 14:29:34.913695   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 66/120
	I0723 14:29:35.915130   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 67/120
	I0723 14:29:36.917030   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 68/120
	I0723 14:29:37.918350   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 69/120
	I0723 14:29:38.920988   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 70/120
	I0723 14:29:39.922397   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 71/120
	I0723 14:29:40.923656   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 72/120
	I0723 14:29:41.925327   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 73/120
	I0723 14:29:42.927219   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 74/120
	I0723 14:29:43.928812   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 75/120
	I0723 14:29:44.930188   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 76/120
	I0723 14:29:45.931615   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 77/120
	I0723 14:29:46.933021   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 78/120
	I0723 14:29:47.934560   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 79/120
	I0723 14:29:48.936541   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 80/120
	I0723 14:29:49.938910   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 81/120
	I0723 14:29:50.940436   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 82/120
	I0723 14:29:51.942442   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 83/120
	I0723 14:29:52.943941   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 84/120
	I0723 14:29:53.945739   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 85/120
	I0723 14:29:54.947985   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 86/120
	I0723 14:29:55.949596   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 87/120
	I0723 14:29:56.950977   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 88/120
	I0723 14:29:57.952342   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 89/120
	I0723 14:29:58.954580   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 90/120
	I0723 14:29:59.957055   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 91/120
	I0723 14:30:00.958584   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 92/120
	I0723 14:30:01.961045   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 93/120
	I0723 14:30:02.962489   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 94/120
	I0723 14:30:03.964642   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 95/120
	I0723 14:30:04.965957   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 96/120
	I0723 14:30:05.967324   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 97/120
	I0723 14:30:06.968982   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 98/120
	I0723 14:30:07.971421   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 99/120
	I0723 14:30:08.972784   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 100/120
	I0723 14:30:09.974125   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 101/120
	I0723 14:30:10.975730   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 102/120
	I0723 14:30:11.977387   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 103/120
	I0723 14:30:12.978846   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 104/120
	I0723 14:30:13.980719   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 105/120
	I0723 14:30:14.982372   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 106/120
	I0723 14:30:15.983725   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 107/120
	I0723 14:30:16.985099   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 108/120
	I0723 14:30:17.986763   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 109/120
	I0723 14:30:18.989110   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 110/120
	I0723 14:30:19.990661   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 111/120
	I0723 14:30:20.992899   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 112/120
	I0723 14:30:21.994319   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 113/120
	I0723 14:30:22.995841   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 114/120
	I0723 14:30:23.997985   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 115/120
	I0723 14:30:24.999166   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 116/120
	I0723 14:30:26.000683   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 117/120
	I0723 14:30:27.002101   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 118/120
	I0723 14:30:28.003775   38284 main.go:141] libmachine: (ha-533645-m04) Waiting for machine to stop 119/120
	I0723 14:30:29.004570   38284 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 14:30:29.004621   38284 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0723 14:30:29.006597   38284 out.go:177] 
	W0723 14:30:29.008090   38284 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0723 14:30:29.008116   38284 out.go:239] * 
	* 
	W0723 14:30:29.010479   38284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 14:30:29.011971   38284 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-533645 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr: exit status 3 (18.981933691s)

                                                
                                                
-- stdout --
	ha-533645
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533645-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:30:29.056128   38715 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:30:29.056473   38715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:30:29.056492   38715 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:29.056502   38715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:30:29.056953   38715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:30:29.057342   38715 out.go:298] Setting JSON to false
	I0723 14:30:29.057432   38715 notify.go:220] Checking for updates...
	I0723 14:30:29.057443   38715 mustload.go:65] Loading cluster: ha-533645
	I0723 14:30:29.058017   38715 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:30:29.058038   38715 status.go:255] checking status of ha-533645 ...
	I0723 14:30:29.058451   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.058503   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.084397   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0723 14:30:29.084841   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.085361   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.085390   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.085819   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.086045   38715 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:30:29.087827   38715 status.go:330] ha-533645 host status = "Running" (err=<nil>)
	I0723 14:30:29.087845   38715 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:30:29.088215   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.088257   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.102966   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0723 14:30:29.103359   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.103782   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.103799   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.104110   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.104285   38715 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:30:29.107698   38715 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:30:29.108131   38715 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:30:29.108149   38715 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:30:29.108322   38715 host.go:66] Checking if "ha-533645" exists ...
	I0723 14:30:29.108620   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.108671   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.124049   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0723 14:30:29.124449   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.124860   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.124878   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.125164   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.125355   38715 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:30:29.125558   38715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:30:29.125582   38715 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:30:29.128434   38715 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:30:29.128865   38715 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:30:29.128898   38715 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:30:29.129204   38715 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:30:29.129370   38715 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:30:29.129542   38715 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:30:29.129686   38715 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:30:29.220044   38715 ssh_runner.go:195] Run: systemctl --version
	I0723 14:30:29.227281   38715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:30:29.244129   38715 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:30:29.244155   38715 api_server.go:166] Checking apiserver status ...
	I0723 14:30:29.244202   38715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:30:29.264014   38715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5055/cgroup
	W0723 14:30:29.275358   38715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5055/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:30:29.275412   38715 ssh_runner.go:195] Run: ls
	I0723 14:30:29.280132   38715 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:30:29.284516   38715 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:30:29.284544   38715 status.go:422] ha-533645 apiserver status = Running (err=<nil>)
	I0723 14:30:29.284555   38715 status.go:257] ha-533645 status: &{Name:ha-533645 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:30:29.284578   38715 status.go:255] checking status of ha-533645-m02 ...
	I0723 14:30:29.284859   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.284891   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.299676   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I0723 14:30:29.300058   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.300483   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.300506   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.300854   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.301034   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetState
	I0723 14:30:29.302732   38715 status.go:330] ha-533645-m02 host status = "Running" (err=<nil>)
	I0723 14:30:29.302748   38715 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:30:29.303022   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.303050   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.318021   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0723 14:30:29.318466   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.318928   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.318945   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.319210   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.319371   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetIP
	I0723 14:30:29.322036   38715 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:30:29.322473   38715 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:25:18 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:30:29.322506   38715 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:30:29.322643   38715 host.go:66] Checking if "ha-533645-m02" exists ...
	I0723 14:30:29.322936   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.322976   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.337168   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
	I0723 14:30:29.337560   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.338036   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.338060   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.338344   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.338567   38715 main.go:141] libmachine: (ha-533645-m02) Calling .DriverName
	I0723 14:30:29.338726   38715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:30:29.338745   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHHostname
	I0723 14:30:29.341634   38715 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:30:29.342022   38715 main.go:141] libmachine: (ha-533645-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:97:d5", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:25:18 +0000 UTC Type:0 Mac:52:54:00:a0:97:d5 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-533645-m02 Clientid:01:52:54:00:a0:97:d5}
	I0723 14:30:29.342053   38715 main.go:141] libmachine: (ha-533645-m02) DBG | domain ha-533645-m02 has defined IP address 192.168.39.182 and MAC address 52:54:00:a0:97:d5 in network mk-ha-533645
	I0723 14:30:29.342161   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHPort
	I0723 14:30:29.342317   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHKeyPath
	I0723 14:30:29.342471   38715 main.go:141] libmachine: (ha-533645-m02) Calling .GetSSHUsername
	I0723 14:30:29.342662   38715 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m02/id_rsa Username:docker}
	I0723 14:30:29.422256   38715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:30:29.440074   38715 kubeconfig.go:125] found "ha-533645" server: "https://192.168.39.254:8443"
	I0723 14:30:29.440103   38715 api_server.go:166] Checking apiserver status ...
	I0723 14:30:29.440139   38715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:30:29.455942   38715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup
	W0723 14:30:29.465644   38715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:30:29.465700   38715 ssh_runner.go:195] Run: ls
	I0723 14:30:29.470092   38715 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0723 14:30:29.474482   38715 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0723 14:30:29.474504   38715 status.go:422] ha-533645-m02 apiserver status = Running (err=<nil>)
	I0723 14:30:29.474513   38715 status.go:257] ha-533645-m02 status: &{Name:ha-533645-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:30:29.474529   38715 status.go:255] checking status of ha-533645-m04 ...
	I0723 14:30:29.474807   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.474837   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.489726   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0723 14:30:29.490135   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.490633   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.490655   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.490968   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.491183   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetState
	I0723 14:30:29.492820   38715 status.go:330] ha-533645-m04 host status = "Running" (err=<nil>)
	I0723 14:30:29.492838   38715 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:30:29.493098   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.493143   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.507585   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39081
	I0723 14:30:29.507945   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.508413   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.508444   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.508796   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.508984   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetIP
	I0723 14:30:29.511584   38715 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:30:29.511979   38715 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:27:55 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:30:29.512015   38715 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:30:29.512139   38715 host.go:66] Checking if "ha-533645-m04" exists ...
	I0723 14:30:29.512432   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:30:29.512465   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:30:29.527187   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0723 14:30:29.527602   38715 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:30:29.528047   38715 main.go:141] libmachine: Using API Version  1
	I0723 14:30:29.528065   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:30:29.528347   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:30:29.528529   38715 main.go:141] libmachine: (ha-533645-m04) Calling .DriverName
	I0723 14:30:29.528717   38715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:30:29.528738   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHHostname
	I0723 14:30:29.531424   38715 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:30:29.531814   38715 main.go:141] libmachine: (ha-533645-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:09:47", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:27:55 +0000 UTC Type:0 Mac:52:54:00:68:09:47 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-533645-m04 Clientid:01:52:54:00:68:09:47}
	I0723 14:30:29.531844   38715 main.go:141] libmachine: (ha-533645-m04) DBG | domain ha-533645-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:09:47 in network mk-ha-533645
	I0723 14:30:29.531955   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHPort
	I0723 14:30:29.532125   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHKeyPath
	I0723 14:30:29.532287   38715 main.go:141] libmachine: (ha-533645-m04) Calling .GetSSHUsername
	I0723 14:30:29.532462   38715 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645-m04/id_rsa Username:docker}
	W0723 14:30:47.994589   38715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.162:22: connect: no route to host
	W0723 14:30:47.994670   38715 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0723 14:30:47.994706   38715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	I0723 14:30:47.994714   38715 status.go:257] ha-533645-m04 status: &{Name:ha-533645-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0723 14:30:47.994731   38715 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-533645 -n ha-533645
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-533645 logs -n 25: (1.687367229s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m04 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp testdata/cp-test.txt                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645:/home/docker/cp-test_ha-533645-m04_ha-533645.txt                      |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645 sudo cat                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645.txt                                |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m02:/home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m02 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m03:/home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n                                                                | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | ha-533645-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-533645 ssh -n ha-533645-m03 sudo cat                                         | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC | 23 Jul 24 14:18 UTC |
	|         | /home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-533645 node stop m02 -v=7                                                    | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-533645 node start m02 -v=7                                                   | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-533645 -v=7                                                          | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-533645 -v=7                                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-533645 --wait=true -v=7                                                   | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:23 UTC | 23 Jul 24 14:28 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-533645                                                               | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:28 UTC |                     |
	| node    | ha-533645 node delete m03 -v=7                                                  | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:28 UTC | 23 Jul 24 14:28 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-533645 stop -v=7                                                             | ha-533645 | jenkins | v1.33.1 | 23 Jul 24 14:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:23:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:23:29.324808   36426 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:23:29.324928   36426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:23:29.324937   36426 out.go:304] Setting ErrFile to fd 2...
	I0723 14:23:29.324941   36426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:23:29.325124   36426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:23:29.325691   36426 out.go:298] Setting JSON to false
	I0723 14:23:29.326715   36426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3955,"bootTime":1721740654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:23:29.326779   36426 start.go:139] virtualization: kvm guest
	I0723 14:23:29.328827   36426 out.go:177] * [ha-533645] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:23:29.330412   36426 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:23:29.330472   36426 notify.go:220] Checking for updates...
	I0723 14:23:29.332869   36426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:23:29.334190   36426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:23:29.335786   36426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:23:29.337305   36426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:23:29.338672   36426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:23:29.340265   36426 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:23:29.340399   36426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:23:29.340874   36426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:23:29.340919   36426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:23:29.355845   36426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0723 14:23:29.356270   36426 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:23:29.356909   36426 main.go:141] libmachine: Using API Version  1
	I0723 14:23:29.356951   36426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:23:29.357262   36426 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:23:29.357443   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.391490   36426 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 14:23:29.392838   36426 start.go:297] selected driver: kvm2
	I0723 14:23:29.392855   36426 start.go:901] validating driver "kvm2" against &{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:23:29.392994   36426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:23:29.393365   36426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:23:29.393446   36426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:23:29.407999   36426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:23:29.408642   36426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:23:29.408671   36426 cni.go:84] Creating CNI manager for ""
	I0723 14:23:29.408678   36426 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 14:23:29.408734   36426 start.go:340] cluster config:
	{Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:23:29.408850   36426 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:23:29.410466   36426 out.go:177] * Starting "ha-533645" primary control-plane node in "ha-533645" cluster
	I0723 14:23:29.411642   36426 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:23:29.411677   36426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:23:29.411683   36426 cache.go:56] Caching tarball of preloaded images
	I0723 14:23:29.411748   36426 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:23:29.411758   36426 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:23:29.411882   36426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/config.json ...
	I0723 14:23:29.412058   36426 start.go:360] acquireMachinesLock for ha-533645: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:23:29.412095   36426 start.go:364] duration metric: took 20.583µs to acquireMachinesLock for "ha-533645"
	I0723 14:23:29.412107   36426 start.go:96] Skipping create...Using existing machine configuration
	I0723 14:23:29.412114   36426 fix.go:54] fixHost starting: 
	I0723 14:23:29.412355   36426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:23:29.412385   36426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:23:29.427394   36426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0723 14:23:29.427881   36426 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:23:29.428390   36426 main.go:141] libmachine: Using API Version  1
	I0723 14:23:29.428411   36426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:23:29.428807   36426 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:23:29.429009   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.429161   36426 main.go:141] libmachine: (ha-533645) Calling .GetState
	I0723 14:23:29.430771   36426 fix.go:112] recreateIfNeeded on ha-533645: state=Running err=<nil>
	W0723 14:23:29.430788   36426 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 14:23:29.432622   36426 out.go:177] * Updating the running kvm2 "ha-533645" VM ...
	I0723 14:23:29.433868   36426 machine.go:94] provisionDockerMachine start ...
	I0723 14:23:29.433889   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:23:29.434084   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.436634   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.437024   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.437055   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.437192   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.437367   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.437527   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.437672   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.437898   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.438063   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.438072   36426 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:23:29.555273   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:23:29.555297   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.555533   36426 buildroot.go:166] provisioning hostname "ha-533645"
	I0723 14:23:29.555549   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.555746   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.558041   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.558428   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.558447   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.558593   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.558840   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.559009   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.559150   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.559320   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.559477   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.559489   36426 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-533645 && echo "ha-533645" | sudo tee /etc/hostname
	I0723 14:23:29.692003   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-533645
	
	I0723 14:23:29.692032   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.694666   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.695010   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.695039   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.695199   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:29.695404   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.695617   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:29.695810   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:29.696039   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:29.696235   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:29.696257   36426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-533645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-533645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-533645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:23:29.811069   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:23:29.811106   36426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:23:29.811142   36426 buildroot.go:174] setting up certificates
	I0723 14:23:29.811154   36426 provision.go:84] configureAuth start
	I0723 14:23:29.811166   36426 main.go:141] libmachine: (ha-533645) Calling .GetMachineName
	I0723 14:23:29.811433   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:23:29.814075   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.814470   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.814491   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.814637   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:29.817096   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.817551   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:29.817583   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:29.817648   36426 provision.go:143] copyHostCerts
	I0723 14:23:29.817692   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:23:29.817725   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:23:29.817734   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:23:29.817801   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:23:29.817872   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:23:29.817894   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:23:29.817901   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:23:29.817924   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:23:29.817961   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:23:29.817977   36426 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:23:29.817983   36426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:23:29.818002   36426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:23:29.818044   36426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.ha-533645 san=[127.0.0.1 192.168.39.103 ha-533645 localhost minikube]
	I0723 14:23:30.016580   36426 provision.go:177] copyRemoteCerts
	I0723 14:23:30.016645   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:23:30.016667   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:30.019382   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.019744   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:30.019786   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.019937   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:30.020137   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.020334   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:30.020480   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:23:30.111015   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:23:30.111100   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:23:30.139839   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:23:30.139932   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0723 14:23:30.166800   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:23:30.166865   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 14:23:30.192758   36426 provision.go:87] duration metric: took 381.588084ms to configureAuth
	I0723 14:23:30.192799   36426 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:23:30.193019   36426 config.go:182] Loaded profile config "ha-533645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:23:30.193086   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:23:30.195533   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.195915   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:23:30.195933   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:23:30.196103   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:23:30.196284   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.196456   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:23:30.196577   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:23:30.196747   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:23:30.196903   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:23:30.196925   36426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:25:01.143328   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:25:01.143370   36426 machine.go:97] duration metric: took 1m31.709486705s to provisionDockerMachine
	I0723 14:25:01.143387   36426 start.go:293] postStartSetup for "ha-533645" (driver="kvm2")
	I0723 14:25:01.143403   36426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:25:01.143441   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.143867   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:25:01.143905   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.147223   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.147677   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.147705   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.147860   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.148056   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.148238   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.148399   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.244627   36426 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:25:01.248621   36426 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:25:01.248644   36426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:25:01.248713   36426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:25:01.248820   36426 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:25:01.248833   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:25:01.248963   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:25:01.258082   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:25:01.281241   36426 start.go:296] duration metric: took 137.841266ms for postStartSetup
	I0723 14:25:01.281285   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.281586   36426 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0723 14:25:01.281615   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.284055   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.284384   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.284413   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.284511   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.284740   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.284904   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.285046   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	W0723 14:25:01.373164   36426 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0723 14:25:01.373187   36426 fix.go:56] duration metric: took 1m31.961073496s for fixHost
	I0723 14:25:01.373208   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.375639   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.376031   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.376054   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.376211   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.376394   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.376552   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.376700   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.376877   36426 main.go:141] libmachine: Using SSH client type: native
	I0723 14:25:01.377038   36426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0723 14:25:01.377048   36426 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:25:01.490921   36426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721744701.447974144
	
	I0723 14:25:01.490943   36426 fix.go:216] guest clock: 1721744701.447974144
	I0723 14:25:01.490950   36426 fix.go:229] Guest: 2024-07-23 14:25:01.447974144 +0000 UTC Remote: 2024-07-23 14:25:01.373194435 +0000 UTC m=+92.081508893 (delta=74.779709ms)
	I0723 14:25:01.490982   36426 fix.go:200] guest clock delta is within tolerance: 74.779709ms
	I0723 14:25:01.490989   36426 start.go:83] releasing machines lock for "ha-533645", held for 1m32.078885482s
	I0723 14:25:01.491012   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.491345   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:25:01.493840   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.494205   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.494231   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.494412   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.494955   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.495133   36426 main.go:141] libmachine: (ha-533645) Calling .DriverName
	I0723 14:25:01.495229   36426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:25:01.495272   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.495485   36426 ssh_runner.go:195] Run: cat /version.json
	I0723 14:25:01.495509   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHHostname
	I0723 14:25:01.498052   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498423   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.498448   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498467   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.498571   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.498740   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.498903   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:01.498925   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.498932   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:01.499082   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.499118   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHPort
	I0723 14:25:01.499263   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHKeyPath
	I0723 14:25:01.499402   36426 main.go:141] libmachine: (ha-533645) Calling .GetSSHUsername
	I0723 14:25:01.499573   36426 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/ha-533645/id_rsa Username:docker}
	I0723 14:25:01.580027   36426 ssh_runner.go:195] Run: systemctl --version
	I0723 14:25:01.627332   36426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:25:01.786837   36426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 14:25:01.794241   36426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:25:01.794315   36426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:25:01.803923   36426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 14:25:01.803955   36426 start.go:495] detecting cgroup driver to use...
	I0723 14:25:01.804020   36426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:25:01.819963   36426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:25:01.833556   36426 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:25:01.833618   36426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:25:01.846752   36426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:25:01.859580   36426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:25:02.010563   36426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:25:02.165654   36426 docker.go:233] disabling docker service ...
	I0723 14:25:02.165736   36426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:25:02.181906   36426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:25:02.195928   36426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:25:02.349290   36426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:25:02.491484   36426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
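At this point the driver has stopped and masked both the Docker and cri-docker units so that CRI-O is the only runtime the kubelet can reach. A quick way to confirm that end state on the guest (a sketch using the same systemd units the log touches, not output from the recorded run):

  systemctl is-enabled docker.socket cri-docker.socket      # expect "disabled"
  systemctl is-enabled docker.service cri-docker.service    # expect "masked"
  systemctl is-active docker                                 # expect "inactive"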
	I0723 14:25:02.504880   36426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:25:02.522973   36426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:25:02.523026   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.532714   36426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:25:02.532771   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.542486   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.551880   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.561620   36426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:25:02.571353   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.581240   36426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.592343   36426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:25:02.602331   36426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:25:02.611220   36426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:25:02.619922   36426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:25:02.759081   36426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:25:07.147686   36426 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.388462954s)
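The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. A minimal spot-check of the result on the guest, assuming the same file path the log edits, might look like:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'
  systemctl is-active crio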
	I0723 14:25:07.147835   36426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:25:07.147976   36426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:25:07.153781   36426 start.go:563] Will wait 60s for crictl version
	I0723 14:25:07.153839   36426 ssh_runner.go:195] Run: which crictl
	I0723 14:25:07.157383   36426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:25:07.192284   36426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:25:07.192366   36426 ssh_runner.go:195] Run: crio --version
	I0723 14:25:07.219211   36426 ssh_runner.go:195] Run: crio --version
	I0723 14:25:07.248598   36426 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:25:07.250007   36426 main.go:141] libmachine: (ha-533645) Calling .GetIP
	I0723 14:25:07.252509   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:07.252925   36426 main.go:141] libmachine: (ha-533645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:b1:de", ip: ""} in network mk-ha-533645: {Iface:virbr1 ExpiryTime:2024-07-23 15:13:12 +0000 UTC Type:0 Mac:52:54:00:a6:b1:de Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-533645 Clientid:01:52:54:00:a6:b1:de}
	I0723 14:25:07.252964   36426 main.go:141] libmachine: (ha-533645) DBG | domain ha-533645 has defined IP address 192.168.39.103 and MAC address 52:54:00:a6:b1:de in network mk-ha-533645
	I0723 14:25:07.253132   36426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:25:07.257316   36426 kubeadm.go:883] updating cluster {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:25:07.257446   36426 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:25:07.257486   36426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:25:07.298904   36426 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:25:07.298925   36426 crio.go:433] Images already preloaded, skipping extraction
	I0723 14:25:07.298984   36426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:25:07.335546   36426 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:25:07.335571   36426 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:25:07.335581   36426 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.30.3 crio true true} ...
	I0723 14:25:07.335685   36426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-533645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
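The kubeadm.go:946 block above is the kubelet systemd drop-in minikube will write out; per the scp lines further down it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once written, the effective unit can be inspected directly on the node (illustrative, not part of the recorded run):

  systemctl cat kubelet
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf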
	I0723 14:25:07.335749   36426 ssh_runner.go:195] Run: crio config
	I0723 14:25:07.379434   36426 cni.go:84] Creating CNI manager for ""
	I0723 14:25:07.379452   36426 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0723 14:25:07.379460   36426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:25:07.379482   36426 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-533645 NodeName:ha-533645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:25:07.379607   36426 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-533645"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
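This is the full kubeadm config that is later written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside (not something the recorded run does), a config of this shape can be sanity-checked with the same kubeadm binary the log uses, assuming the "kubeadm config validate" subcommand is available in the v1.30.3 binaries:

  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new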
	
	I0723 14:25:07.379625   36426 kube-vip.go:115] generating kube-vip config ...
	I0723 14:25:07.379663   36426 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0723 14:25:07.390474   36426 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0723 14:25:07.390586   36426 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
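With this static pod manifest in place and kubelet running, the control-plane VIP can be verified by hand. A minimal sketch, reusing the address (192.168.39.254), interface (eth0) and port (8443) from the config above; this is not output from the recorded run:

  ip addr show dev eth0 | grep 192.168.39.254
  sudo crictl ps --name kube-vip
  curl -k https://192.168.39.254:8443/healthz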
	I0723 14:25:07.390636   36426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:25:07.399481   36426 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:25:07.399542   36426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0723 14:25:07.408008   36426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0723 14:25:07.423785   36426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:25:07.438801   36426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0723 14:25:07.453897   36426 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0723 14:25:07.469433   36426 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0723 14:25:07.474688   36426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:25:07.616984   36426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:25:07.631588   36426 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645 for IP: 192.168.39.103
	I0723 14:25:07.631608   36426 certs.go:194] generating shared ca certs ...
	I0723 14:25:07.631622   36426 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.631752   36426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:25:07.631798   36426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:25:07.631816   36426 certs.go:256] generating profile certs ...
	I0723 14:25:07.631888   36426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/client.key
	I0723 14:25:07.631912   36426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5
	I0723 14:25:07.631927   36426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103 192.168.39.182 192.168.39.127 192.168.39.254]
	I0723 14:25:07.791827   36426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 ...
	I0723 14:25:07.791856   36426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5: {Name:mk101f11a0cc0130e7f3750253f2ca35c44f1ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.792021   36426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5 ...
	I0723 14:25:07.792033   36426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5: {Name:mk5debc47b8cbb99d950d8a1de5e6b1878e14a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:25:07.792100   36426 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt.95ac2cf5 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt
	I0723 14:25:07.792261   36426 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key.95ac2cf5 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key
	I0723 14:25:07.792394   36426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key
	I0723 14:25:07.792410   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:25:07.792421   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:25:07.792432   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:25:07.792443   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:25:07.792453   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:25:07.792466   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:25:07.792479   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:25:07.792491   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:25:07.792543   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:25:07.792570   36426 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:25:07.792579   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:25:07.792599   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:25:07.792622   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:25:07.792644   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:25:07.792679   36426 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:25:07.792705   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:07.792718   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:25:07.792730   36426 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:25:07.793256   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:25:07.818226   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:25:07.841752   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:25:07.863790   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:25:07.885744   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 14:25:07.907091   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:25:07.928670   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:25:07.950365   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/ha-533645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 14:25:07.971959   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:25:07.992972   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:25:08.014464   36426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:25:08.036768   36426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:25:08.052096   36426 ssh_runner.go:195] Run: openssl version
	I0723 14:25:08.057509   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:25:08.067310   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.071350   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.071392   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:25:08.076630   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:25:08.085343   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:25:08.095103   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.099276   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.099323   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:25:08.104602   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:25:08.113311   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:25:08.123405   36426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.127644   36426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.127688   36426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:25:08.132892   36426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
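Each "ln -fs ... /etc/ssl/certs/<hash>.0" call pairs a CA PEM with its OpenSSL subject hash, which is exactly what the preceding "openssl x509 -hash -noout" invocations compute; that is how the system trust store locates the minikube CA and the test certificates. For illustration, using the minikube CA from the lines above:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the link target above
  ls -l /etc/ssl/certs/ | grep -E 'minikubeCA|18503'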
	I0723 14:25:08.141700   36426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:25:08.145844   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 14:25:08.151149   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 14:25:08.156218   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 14:25:08.161215   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 14:25:08.166555   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 14:25:08.171611   36426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
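The openssl calls above implement a simple expiry guard: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same pattern written as a standalone loop over a few of the certs the log checks (illustrative only):

  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
    sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
      && echo "$c: valid for more than 24h" || echo "$c: expiring within 24h"
  done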
	I0723 14:25:08.176774   36426 kubeadm.go:392] StartCluster: {Name:ha-533645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-533645 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:25:08.176926   36426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:25:08.176965   36426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:25:08.211100   36426 cri.go:89] found id: "cc130d1c92bae3c9e9791f1835f140213686af10adf6434010b55ac85f7293fe"
	I0723 14:25:08.211120   36426 cri.go:89] found id: "946446943bfa5a933cb67d27b02de7fccbd3772337ca82479985a55d61331803"
	I0723 14:25:08.211124   36426 cri.go:89] found id: "1db081ee945c36cc2ca4087ffb7e3e16ab8e74ae4d142c959677bde60737e5cd"
	I0723 14:25:08.211128   36426 cri.go:89] found id: "875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219"
	I0723 14:25:08.211130   36426 cri.go:89] found id: "c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46"
	I0723 14:25:08.211133   36426 cri.go:89] found id: "ee98d1058de99c09e1397d14de2b44ecadb981066604cac05780c2c6380aed9f"
	I0723 14:25:08.211136   36426 cri.go:89] found id: "204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493"
	I0723 14:25:08.211138   36426 cri.go:89] found id: "1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e"
	I0723 14:25:08.211140   36426 cri.go:89] found id: "a208ea67ea379837bfd69dc6775ffa1b202c66a7a90e072d657c30b5d9ba1a71"
	I0723 14:25:08.211145   36426 cri.go:89] found id: "76bcad60035c6453da123c546b8d151ae4bb59f949de157578fab6dc7013cd7c"
	I0723 14:25:08.211150   36426 cri.go:89] found id: "081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e"
	I0723 14:25:08.211153   36426 cri.go:89] found id: "7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090"
	I0723 14:25:08.211155   36426 cri.go:89] found id: "e28c0ebf351e0b782b96165381aa58b568a2a87fad684d4f4c077d8b6582c1f3"
	I0723 14:25:08.211158   36426 cri.go:89] found id: ""
	I0723 14:25:08.211193   36426 ssh_runner.go:195] Run: sudo runc list -f json
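The container IDs above come from a label-filtered crictl query scoped to kube-system; the follow-up "runc list -f json" is how the driver checks which of those are actually live at the OCI level. Run by hand on the node, the equivalent with human-readable output instead of bare IDs would be:

  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
  sudo runc list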
	
	
	==> CRI-O <==
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.639313033Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9317f497-45e0-4eeb-a938-b59d912d034d name=/runtime.v1.RuntimeService/Version
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.640780386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da9e5222-54f5-4a97-9e56-932166b7f818 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.641821267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721745048641789693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da9e5222-54f5-4a97-9e56-932166b7f818 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.642645644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e03889dd-c01e-4b6a-a36b-3a9ee24ff6d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.642742220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e03889dd-c01e-4b6a-a36b-3a9ee24ff6d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.643397212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e03889dd-c01e-4b6a-a36b-3a9ee24ff6d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.691425407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa575f6d-42b1-4e4a-bc1a-6476754a752a name=/runtime.v1.RuntimeService/Version
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.691526868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa575f6d-42b1-4e4a-bc1a-6476754a752a name=/runtime.v1.RuntimeService/Version
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.693442237Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0075a912-9f37-43e8-b180-c6a6b507cf0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.694303421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721745048694268339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0075a912-9f37-43e8-b180-c6a6b507cf0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.695016319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0b24c1b-1e6b-41ab-b949-a104677c1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.695096300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0b24c1b-1e6b-41ab-b949-a104677c1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.695675855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0b24c1b-1e6b-41ab-b949-a104677c1ddd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.722451537Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7668a5e0-9b75-42d6-976c-a82540ca88a8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.724106777Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-cd87c,Uid:c96075c6-138f-49ca-80af-c75e842c5852,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744742058003287,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:16:46.274827348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-533645,Uid:42cd0a510ca9640dbc5ed62c1d3a4ebd,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721744722045587866,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{kubernetes.io/config.hash: 42cd0a510ca9640dbc5ed62c1d3a4ebd,kubernetes.io/config.seen: 2024-07-23T14:25:07.427724701Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744718832871155,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{kubectl.kub
ernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T14:14:05.803689838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&PodSandboxMetadata{Name:kube-proxy-9wh4w,Uid:d9eb4982-e145-42cf-9a84-6013d7cdd3aa,Namespace:kube-system,Attempt:1,},State:
SANDBOX_READY,CreatedAt:1721744718828003551,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.486258898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&PodSandboxMetadata{Name:kindnet-99vkr,Uid:495ea524-de15-401d-9ed3-fec375bc8042,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744716803600081,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,k8s-app: kindnet,pod-template-generation: 1,tier: n
ode,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.495076967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrvbf,Uid:ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744715834105539,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.809780225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s6xzz,Uid:926a30df-71f1-48d7-92fb-ead057f2504d,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1721744715833196285,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.795958697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-533645,Uid:6de7f3c8e278c087425628d1b79c1d22,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744713890837480,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6de
7f3c8e278c087425628d1b79c1d22,kubernetes.io/config.seen: 2024-07-23T14:13:36.769857865Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-533645,Uid:5693e50c5ce4a113bda653dc5ed85d89,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744708273200417,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.103:8443,kubernetes.io/config.hash: 5693e50c5ce4a113bda653dc5ed85d89,kubernetes.io/config.seen: 2024-07-23T14:13:36.769855212Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Me
tadata:&PodSandboxMetadata{Name:etcd-ha-533645,Uid:0116d3bd9333422ee3ba97043c03c966,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744708270193666,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.103:2379,kubernetes.io/config.hash: 0116d3bd9333422ee3ba97043c03c966,kubernetes.io/config.seen: 2024-07-23T14:13:36.769851172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-533645,Uid:a779b56396ae961a52b991bf79e41c79,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721744708268780094,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a779b56396ae961a52b991bf79e41c79,kubernetes.io/config.seen: 2024-07-23T14:13:36.769856647Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-cd87c,Uid:c96075c6-138f-49ca-80af-c75e842c5852,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744207484085195,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:16:46.274827348Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrvbf,Uid:ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744046119612900,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.809780225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s6xzz,Uid:926a30df-71f1-48d7-92fb-ead057f2504d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744046102839801,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:14:05.795958697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&PodSandboxMetadata{Name:kindnet-99vkr,Uid:495ea524-de15-401d-9ed3-fec375bc8042,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744029809613826,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.495076967Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&PodSandboxMetadata{Name:kube-proxy-9wh4w,Uid:d9eb4982-e145-42cf-9a84-6013d7cdd3aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744029807470232,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T14:13:49.486258898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&PodSandboxMetadata{Name:etcd-ha-533645,Uid:0116d3bd9333422ee3ba97043c03c966,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744010425473909,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.103:2379,kubernetes.io/config.hash: 0116d3bd9333422ee3ba97043c03c966,kubernetes.io/config.seen: 2024-07-23T14:13:29.926300650Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-533645,Uid:6de7f3c8e278c087425628d1b79c1d22,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721744010404618151,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6de7f3c8
e278c087425628d1b79c1d22,kubernetes.io/config.seen: 2024-07-23T14:13:29.926308682Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7668a5e0-9b75-42d6-976c-a82540ca88a8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.725637325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bca9f9f-f405-4176-80ae-423a385abb41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.725717068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bca9f9f-f405-4176-80ae-423a385abb41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.727097929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bca9f9f-f405-4176-80ae-423a385abb41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.751288555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da6ef49b-7dc5-487d-baab-043a279565b1 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.751366809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da6ef49b-7dc5-487d-baab-043a279565b1 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.753006821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e48065e6-e7b9-447d-9cef-c6c3a30c9902 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.753742639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721745048753716948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e48065e6-e7b9-447d-9cef-c6c3a30c9902 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.754389755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5318da43-9d1c-4386-bd4a-e34e40b0e1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.754444652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5318da43-9d1c-4386-bd4a-e34e40b0e1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:30:48 ha-533645 crio[3705]: time="2024-07-23 14:30:48.755078718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd52126455865a0a9cacee973d03f20a0417f2af1cffe1698d70f8b885a19bfe,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721744788799677839,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721744748803924925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721744744807633701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62bf4276e719ae17efc149151d00acc1c68f7edb2b559da399e7c840799cf2,PodSandboxId:a40b6778e0792e61527b4a492f4fe8bcbcd6d7eb641484c0b7737f6384572847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721744743801691383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52ab05ba-6dfc-4cc6-9085-8632f5cd7a66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5a8d22,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca3498bbe37535941a23c74fdf3f95de9d0422cf9d9085805d087605bd1992b,PodSandboxId:e9e3414356d26abd66fe52980a8d7d3053f46425580dbbbdbd16e8ad22631e68,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721744742205985178,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annotations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b777505a2c50f7278574ca4cbecf300199924e79ea34aa034b299fd108a7f08,PodSandboxId:6a75d2cb8f8e7ef983432770446871d75d6df48e93fea264553b56a808d32532,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721744722121314162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42cd0a510ca9640dbc5ed62c1d3a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e,PodSandboxId:b9b9a76367d4537e6fbea03553756ef95a25859ec5d8481175a05a804a2f02f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721744718948068475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb,PodSandboxId:9549aad58e1e2f7ee7ddbd0bbea39a894275e3ec39b5af6a51acc8873616ba8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721744717156179719,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4daeb85
c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504,PodSandboxId:3317a972a9fb97d91454ad5300da5a448b06d8476db020635cc0127205dc7528,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744716030928066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d,PodSandboxId:b21bb2d4573f43b4e6acdc3b9a25ba8a501967baf0dc4b21683897464867fc9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721744715993724081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db,PodSandboxId:a7166d544175053c1e090f8ff8d498a5598a0a2466a411c2df82d751f6aff35f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721744713980326382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54,PodSandboxId:24d42cb054406d28164aa0ff12de61722997ea6b8d6952f731a8eab3e14d55c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721744708479347986,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-533645,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: a779b56396ae961a52b991bf79e41c79,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f,PodSandboxId:e0bca7366951af0fe5ad76ffca2bf56a0baa8188880b9b03cd86c0e1c74a4dd1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721744708425093443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5693e50c5ce4a113bda653dc5ed85d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2164da0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2,PodSandboxId:7e5a095202b51ba46c1fb30e0ec734f83360ab0c9ce0c87807810d2481bbe68e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721744708419697999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Ann
otations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ba0f9525e42116f68938091ad5dab79e29bd9255ef81df1cb078c4f6ddcadb,PodSandboxId:8e48b2467dce80a1b812e1924b4ad098fe457de72347b26234e430ce3b1a2e99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721744210279814229,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cd87c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c96075c6-138f-49ca-80af-c75e842c5852,},Annot
ations:map[string]string{io.kubernetes.container.hash: ab81262c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219,PodSandboxId:67e32a92d8db3ab2bf45f9266b685a18187dcdd0c656df26458f1b1d2e423427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046410206441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nrvbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0a4d6f-76cd-44de-a0ba-db0db7fce7ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8f44e137,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46,PodSandboxId:a7feedf1d20d0b270b3b2503cda076179d8b1706a59b6b4b671de60f21434785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721744046339921179,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s6xzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926a30df-71f1-48d7-92fb-ead057f2504d,},Annotations:map[string]string{io.kubernetes.container.hash: b79d2c0d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493,PodSandboxId:08c39cde805a7f1102a6810a1a2de553fde5d35aa1459896da160c5f46a1aa97,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721744034722760931,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99vkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495ea524-de15-401d-9ed3-fec375bc8042,},Annotations:map[string]string{io.kubernetes.container.hash: dfbed60b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e,PodSandboxId:8cb09524a9c810ee67f6d4cbdf138868361b89c647f21ee794117f5fde6ff384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721744030096405491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wh4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eb4982-e145-42cf-9a84-6013d7cdd3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 3480dc97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e,PodSandboxId:5d23d91d7b6c34c0ef13d275be44b9cf61ec35e25ea37a391c42f6e85442fa0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721744010678956358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0116d3bd9333422ee3ba97043c03c966,},Annotations:map[string]string{io.kubernetes.container.hash: 39e0d376,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090,PodSandboxId:17bfeff63e98487bb969febbc81c6cd43d4356aa3e6a0dc14991d6389263d0bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721744010650743943,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-533645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6de7f3c8e278c087425628d1b79c1d22,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5318da43-9d1c-4386-bd4a-e34e40b0e1f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd52126455865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   a40b6778e0792       storage-provisioner
	a56fd142b0b4e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   e0bca7366951a       kube-apiserver-ha-533645
	95b833ba6bc09       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   2                   24d42cb054406       kube-controller-manager-ha-533645
	7d62bf4276e71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   a40b6778e0792       storage-provisioner
	3ca3498bbe375       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   e9e3414356d26       busybox-fc5497c4f-cd87c
	0b777505a2c50       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   6a75d2cb8f8e7       kube-vip-ha-533645
	46676ad486f94       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   b9b9a76367d45       kube-proxy-9wh4w
	8b31e93d3dd22       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   9549aad58e1e2       kindnet-99vkr
	e4daeb85c3ac6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   3317a972a9fb9       coredns-7db6d8ff4d-s6xzz
	1128fcbd5591d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   b21bb2d4573f4       coredns-7db6d8ff4d-nrvbf
	fa91958d57171       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   a7166d5441750       kube-scheduler-ha-533645
	bb05f37daa7f4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   24d42cb054406       kube-controller-manager-ha-533645
	2c6f6682e15ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   e0bca7366951a       kube-apiserver-ha-533645
	063d43d9b55be       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   7e5a095202b51       etcd-ha-533645
	01ba0f9525e42       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   8e48b2467dce8       busybox-fc5497c4f-cd87c
	875e4306cadef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   67e32a92d8db3       coredns-7db6d8ff4d-nrvbf
	c272094e83046       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   a7feedf1d20d0       coredns-7db6d8ff4d-s6xzz
	204bd8ec5a070       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   08c39cde805a7       kindnet-99vkr
	1d5b9787b76de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   8cb09524a9c81       kube-proxy-9wh4w
	081aaa8c6121c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   5d23d91d7b6c3       etcd-ha-533645
	7972ddd5dc32d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   17bfeff63e984       kube-scheduler-ha-533645
	
	
	==> coredns [1128fcbd5591d4c2c6af086019f70f14a4da1a9b30ec30e9ad0ccd81ceb4dc6d] <==
	Trace[174020751]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:28.975)
	Trace[174020751]: [10.001772994s] [10.001772994s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51674->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51674->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [875e4306cadef96a80b4b315fabb5056b0cb5a9255b96edb0666c8bcd8860219] <==
	[INFO] 10.244.0.4:49583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187618s
	[INFO] 10.244.0.4:47929 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087678s
	[INFO] 10.244.2.2:38089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189381s
	[INFO] 10.244.2.2:42424 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002105089s
	[INFO] 10.244.2.2:44423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066747s
	[INFO] 10.244.1.2:32850 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770779s
	[INFO] 10.244.1.2:53620 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074588s
	[INFO] 10.244.1.2:33169 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009318s
	[INFO] 10.244.0.4:47876 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009475s
	[INFO] 10.244.2.2:42045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092251s
	[INFO] 10.244.2.2:58530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137054s
	[INFO] 10.244.1.2:36698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167251s
	[INFO] 10.244.1.2:56144 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082378s
	[INFO] 10.244.1.2:37800 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138485s
	[INFO] 10.244.0.4:35800 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198717s
	[INFO] 10.244.0.4:55540 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113741s
	[INFO] 10.244.0.4:40041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000256677s
	[INFO] 10.244.1.2:51609 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132031s
	[INFO] 10.244.1.2:56610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00023971s
	[INFO] 10.244.1.2:42525 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084914s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c272094e830461d10881fa34f0047514788d3eea8b89f3cca8e646a5a0b99a46] <==
	[INFO] 10.244.2.2:36170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001374503s
	[INFO] 10.244.2.2:32919 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148684s
	[INFO] 10.244.2.2:33222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130497s
	[INFO] 10.244.1.2:41720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132072s
	[INFO] 10.244.1.2:46039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136478s
	[INFO] 10.244.1.2:42265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246596s
	[INFO] 10.244.1.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106745s
	[INFO] 10.244.1.2:42065 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173598s
	[INFO] 10.244.0.4:49694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097989s
	[INFO] 10.244.0.4:55332 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105679s
	[INFO] 10.244.0.4:55778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057634s
	[INFO] 10.244.2.2:46643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151446s
	[INFO] 10.244.2.2:47656 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125295s
	[INFO] 10.244.1.2:33099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116864s
	[INFO] 10.244.0.4:43829 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233901s
	[INFO] 10.244.2.2:39898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180683s
	[INFO] 10.244.2.2:53185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148942s
	[INFO] 10.244.2.2:36301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000319769s
	[INFO] 10.244.2.2:54739 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011416s
	[INFO] 10.244.1.2:40740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148117s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4daeb85c3ac62fb2687884dddd4764be21a65af28a8ab335d0d4a5b2c295504] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-533645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_13_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:30:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:29:12 +0000   Tue, 23 Jul 2024 14:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:29:12 +0000   Tue, 23 Jul 2024 14:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:29:12 +0000   Tue, 23 Jul 2024 14:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:29:12 +0000   Tue, 23 Jul 2024 14:29:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-533645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 016f247620dd4139a26ce62f3129dde1
	  System UUID:                016f2476-20dd-4139-a26c-e62f3129dde1
	  Boot ID:                    218264a1-e12e-486d-a0c2-4ec59bc9cd30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cd87c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-nrvbf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-s6xzz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-533645                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-99vkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-533645             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-533645    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-9wh4w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-533645             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-533645                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m48s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Warning  ContainerGCFailed        6m13s (x2 over 7m13s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-533645 event: Registered Node ha-533645 in Controller
	  Normal   NodeNotReady             108s                   node-controller  Node ha-533645 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     97s (x2 over 17m)      kubelet          Node ha-533645 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    97s (x2 over 17m)      kubelet          Node ha-533645 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                97s (x2 over 16m)      kubelet          Node ha-533645 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  97s (x2 over 17m)      kubelet          Node ha-533645 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-533645-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_15_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:15:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:30:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:26:37 +0000   Tue, 23 Jul 2024 14:25:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    ha-533645-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 024bddfd48eb471b960e0dab2d3cd45b
	  System UUID:                024bddfd-48eb-471b-960e-0dab2d3cd45b
	  Boot ID:                    f5b66f61-31e7-4590-a690-1a9245df56a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tlvlp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-533645-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-95sfh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-533645-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-533645-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-p25cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-533645-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-533645-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m46s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-533645-m02 status is now: NodeNotReady
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-533645-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-533645-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-533645-m02 event: Registered Node ha-533645-m02 in Controller
	
	
	Name:               ha-533645-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-533645-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=ha-533645
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_17_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:17:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-533645-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:28:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 23 Jul 2024 14:28:01 +0000   Tue, 23 Jul 2024 14:29:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    ha-533645-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d58ceb89e2492c9f4ada3b3365c263
	  System UUID:                c6d58ceb-89e2-492c-9f4a-da3b3365c263
	  Boot ID:                    7c9b0a32-693f-4200-8920-455f96c741ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kkrzj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-f4tkn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-nz528           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-533645-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   NodeNotReady             4m8s                   node-controller  Node ha-533645-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-533645-m04 event: Registered Node ha-533645-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-533645-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-533645-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-533645-m04 has been rebooted, boot id: 7c9b0a32-693f-4200-8920-455f96c741ac
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-533645-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-533645-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +7.424464] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.065789] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058371] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.157255] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.139843] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.253665] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.906302] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.745369] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.058504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.271647] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.077951] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.844081] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.054308] kauditd_printk_skb: 34 callbacks suppressed
	[Jul23 14:15] kauditd_printk_skb: 24 callbacks suppressed
	[Jul23 14:25] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.151028] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.197525] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.146196] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.265504] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +4.858652] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.082505] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.370996] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.054970] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.413995] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.846041] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [063d43d9b55be848f835afbea0bc140f1ca6eab7b3ad0cbd6533b0669251b1d2] <==
	{"level":"warn","ts":"2024-07-23T14:27:18.790909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-23T14:27:18.79101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2ed9c9959a67d1c2","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-23T14:27:26.736833Z","caller":"traceutil/trace.go:171","msg":"trace[1708522923] transaction","detail":"{read_only:false; response_revision:2550; number_of_response:1; }","duration":"141.808451ms","start":"2024-07-23T14:27:26.594762Z","end":"2024-07-23T14:27:26.73657Z","steps":["trace[1708522923] 'process raft request'  (duration: 141.727567ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:27:28.773379Z","caller":"traceutil/trace.go:171","msg":"trace[2043252536] linearizableReadLoop","detail":"{readStateIndex:2987; appliedIndex:2987; }","duration":"118.562988ms","start":"2024-07-23T14:27:28.654777Z","end":"2024-07-23T14:27:28.77334Z","steps":["trace[2043252536] 'read index received'  (duration: 118.557396ms)","trace[2043252536] 'applied index is now lower than readState.Index'  (duration: 4.362µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:27:28.773617Z","caller":"traceutil/trace.go:171","msg":"trace[1923459605] transaction","detail":"{read_only:false; response_revision:2561; number_of_response:1; }","duration":"159.941698ms","start":"2024-07-23T14:27:28.613659Z","end":"2024-07-23T14:27:28.773601Z","steps":["trace[1923459605] 'process raft request'  (duration: 159.818321ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:27:28.774012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.112974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-533645-m03\" ","response":"range_response_count:1 size:5803"}
	{"level":"info","ts":"2024-07-23T14:27:28.774197Z","caller":"traceutil/trace.go:171","msg":"trace[321309753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-533645-m03; range_end:; response_count:1; response_revision:2561; }","duration":"119.42965ms","start":"2024-07-23T14:27:28.654749Z","end":"2024-07-23T14:27:28.774179Z","steps":["trace[321309753] 'agreement among raft nodes before linearized reading'  (duration: 119.018481ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:28:15.059797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"836b637e1db3e16e switched to configuration voters=(3376388288052943808 9469772034791956846)"}
	{"level":"info","ts":"2024-07-23T14:28:15.062083Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"58a1f21afce1a625","local-member-id":"836b637e1db3e16e","removed-remote-peer-id":"2ed9c9959a67d1c2","removed-remote-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-07-23T14:28:15.062231Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:28:15.066369Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.06643Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:28:15.073579Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.073682Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.073807Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:28:15.07407Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2","error":"context canceled"}
	{"level":"warn","ts":"2024-07-23T14:28:15.074238Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2ed9c9959a67d1c2","error":"failed to read 2ed9c9959a67d1c2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-23T14:28:15.074318Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:28:15.0745Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2","error":"context canceled"}
	{"level":"info","ts":"2024-07-23T14:28:15.074548Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.074599Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.07466Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"836b637e1db3e16e","removed-remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:28:15.074742Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"836b637e1db3e16e","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"2ed9c9959a67d1c2"}
	{"level":"warn","ts":"2024-07-23T14:28:15.128454Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.127:33728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-23T14:28:15.129527Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.127:33734","server-name":"","error":"EOF"}
	
	
	==> etcd [081aaa8c6121cf72755ce793310660061a66084558c18a69e5e363d0bafeb04e] <==
	{"level":"info","ts":"2024-07-23T14:23:30.339588Z","caller":"traceutil/trace.go:171","msg":"trace[1562796131] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; }","duration":"7.740614453s","start":"2024-07-23T14:23:22.598967Z","end":"2024-07-23T14:23:30.339582Z","steps":["trace[1562796131] 'agreement among raft nodes before linearized reading'  (duration: 7.740572136s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:23:30.339611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:23:22.598964Z","time spent":"7.740640387s","remote":"127.0.0.1:34750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/07/23 14:23:30 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-23T14:23:30.335111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:23:29.600387Z","time spent":"734.720796ms","remote":"127.0.0.1:35178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/07/23 14:23:30 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-23T14:23:30.404539Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.103:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:23:30.40459Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.103:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:23:30.404651Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"836b637e1db3e16e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404847Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404908Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.404989Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405184Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405243Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405282Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405293Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2edb5742552f5bc0"}
	{"level":"info","ts":"2024-07-23T14:23:30.405298Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405307Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405342Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405444Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405592Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"836b637e1db3e16e","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.405642Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2ed9c9959a67d1c2"}
	{"level":"info","ts":"2024-07-23T14:23:30.408544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.103:2380"}
	{"level":"info","ts":"2024-07-23T14:23:30.408664Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.103:2380"}
	{"level":"info","ts":"2024-07-23T14:23:30.408687Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-533645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.103:2380"],"advertise-client-urls":["https://192.168.39.103:2379"]}
	
	
	==> kernel <==
	 14:30:49 up 17 min,  0 users,  load average: 0.10, 0.28, 0.24
	Linux ha-533645 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [204bd8ec5a070f89eb23c87809788650b5edd00d54659e9ddd68dfece6e87493] <==
	I0723 14:22:55.722969       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:05.723271       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:05.723336       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:05.723570       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:05.723595       1 main.go:299] handling current node
	I0723 14:23:05.723610       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:05.723615       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:05.723678       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:05.723682       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:15.722832       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:15.722883       1 main.go:299] handling current node
	I0723 14:23:15.722927       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:15.722940       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:15.723227       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:15.723253       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:15.723359       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:15.723380       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:23:25.727722       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:23:25.727851       1 main.go:299] handling current node
	I0723 14:23:25.727884       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:23:25.727906       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:23:25.728257       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0723 14:23:25.728342       1 main.go:322] Node ha-533645-m03 has CIDR [10.244.2.0/24] 
	I0723 14:23:25.728449       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:23:25.728474       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8b31e93d3dd22c71b51dcd6307e2c2cc69d86f1b915425eff8eb04f9fa1c11cb] <==
	I0723 14:30:08.132711       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:30:18.132955       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:30:18.133056       1 main.go:299] handling current node
	I0723 14:30:18.133078       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:30:18.133088       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:30:18.133260       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:30:18.133283       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:30:28.137257       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:30:28.137315       1 main.go:299] handling current node
	I0723 14:30:28.137345       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:30:28.137353       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:30:28.137527       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:30:28.137550       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:30:38.137199       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:30:38.137349       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:30:38.137514       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:30:38.137540       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:30:38.137601       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:30:38.137620       1 main.go:299] handling current node
	I0723 14:30:48.140245       1 main.go:295] Handling node with IPs: map[192.168.39.182:{}]
	I0723 14:30:48.140305       1 main.go:322] Node ha-533645-m02 has CIDR [10.244.1.0/24] 
	I0723 14:30:48.140477       1 main.go:295] Handling node with IPs: map[192.168.39.162:{}]
	I0723 14:30:48.140503       1 main.go:322] Node ha-533645-m04 has CIDR [10.244.3.0/24] 
	I0723 14:30:48.140617       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0723 14:30:48.140644       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2c6f6682e15ed5b1b8ba5abc5df63e6aae49a573fba9fcd1843849f7012ec80f] <==
	I0723 14:25:08.714551       1 options.go:221] external host was not specified, using 192.168.39.103
	I0723 14:25:08.716290       1 server.go:148] Version: v1.30.3
	I0723 14:25:08.716334       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:25:09.157571       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0723 14:25:09.164435       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:25:09.179685       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0723 14:25:09.186572       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0723 14:25:09.186907       1 instance.go:299] Using reconciler: lease
	W0723 14:25:29.155607       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0723 14:25:29.156932       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0723 14:25:29.187968       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0723 14:25:29.188105       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a56fd142b0b4ee78f8e1b3e4324d2f184c28b2cb45138959acd898c3760c3491] <==
	I0723 14:25:50.637441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0723 14:25:50.637497       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 14:25:50.637595       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 14:25:50.714195       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 14:25:50.714263       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 14:25:50.714204       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 14:25:50.714245       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 14:25:50.717096       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:25:50.717966       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 14:25:50.722880       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0723 14:25:50.729951       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.127 192.168.39.182]
	I0723 14:25:50.738065       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 14:25:50.738094       1 aggregator.go:165] initial CRD sync complete...
	I0723 14:25:50.738113       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 14:25:50.738152       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 14:25:50.738157       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:25:50.757955       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 14:25:50.761187       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:25:50.761223       1 policy_source.go:224] refreshing policies
	I0723 14:25:50.796329       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:25:50.832093       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:25:50.840099       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0723 14:25:50.843751       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0723 14:25:51.622016       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0723 14:25:52.160871       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.127 192.168.39.182]
	
	
	==> kube-controller-manager [95b833ba6bc090ca533bacc1535fbd1bba6cb078cf1d39d4dcb12bb06a946c6f] <==
	I0723 14:29:01.294751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.488987ms"
	I0723 14:29:01.295589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.375µs"
	I0723 14:29:01.388161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.382438ms"
	I0723 14:29:01.388422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.305µs"
	I0723 14:29:01.417500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.791113ms"
	I0723 14:29:01.418331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="227.485µs"
	I0723 14:29:01.491773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.538382ms"
	I0723 14:29:01.492164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.875µs"
	E0723 14:29:03.805665       1 gc_controller.go:153] "Failed to get node" err="node \"ha-533645-m03\" not found" logger="pod-garbage-collector-controller" node="ha-533645-m03"
	E0723 14:29:03.805703       1 gc_controller.go:153] "Failed to get node" err="node \"ha-533645-m03\" not found" logger="pod-garbage-collector-controller" node="ha-533645-m03"
	E0723 14:29:03.805713       1 gc_controller.go:153] "Failed to get node" err="node \"ha-533645-m03\" not found" logger="pod-garbage-collector-controller" node="ha-533645-m03"
	E0723 14:29:03.805720       1 gc_controller.go:153] "Failed to get node" err="node \"ha-533645-m03\" not found" logger="pod-garbage-collector-controller" node="ha-533645-m03"
	E0723 14:29:03.805726       1 gc_controller.go:153] "Failed to get node" err="node \"ha-533645-m03\" not found" logger="pod-garbage-collector-controller" node="ha-533645-m03"
	I0723 14:29:04.025596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.264894ms"
	I0723 14:29:04.026238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.794µs"
	I0723 14:29:16.990856       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h9krx\": the object has been modified; please apply your changes to the latest version and try again"
	I0723 14:29:16.991424       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c6785928-a46e-422c-892f-7d7089b74c17", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h9krx": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:29:17.051623       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-h9krx\": the object has been modified; please apply your changes to the latest version and try again"
	I0723 14:29:17.052608       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c6785928-a46e-422c-892f-7d7089b74c17", APIVersion:"v1", ResourceVersion:"299", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-h9krx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-h9krx": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:29:17.052805       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="78.351384ms"
	E0723 14:29:17.052856       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:29:17.139004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.07067ms"
	I0723 14:29:17.139253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="156.415µs"
	I0723 14:29:17.140620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.255452ms"
	I0723 14:29:17.140984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="227.29µs"
	
	
	==> kube-controller-manager [bb05f37daa7f4a1adcae07e66f6baf4dd02e9e4aea425cc780869801db49fc54] <==
	I0723 14:25:09.201580       1 serving.go:380] Generated self-signed cert in-memory
	I0723 14:25:09.682342       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0723 14:25:09.682382       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:25:09.683970       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 14:25:09.684023       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0723 14:25:09.684318       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 14:25:09.684565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0723 14:25:30.195156       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.103:8443/healthz\": dial tcp 192.168.39.103:8443: connect: connection refused"
	
	
	==> kube-proxy [1d5b9787b76decdd21159640f6ade1ac40591057c4b3fa0ca6519ed722bad40e] <==
	E0723 14:22:23.865675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.937077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.937683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.938655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.938738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:26.938841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:26.938871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.080842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.080900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.080970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.081003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:33.081059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:33.081088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:42.298295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:42.298430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:45.370298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:45.370359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:22:48.441994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:22:48.442178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:03.801775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:03.801895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-533645&resourceVersion=1978": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:03.802041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:03.802150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0723 14:23:09.945175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0723 14:23:09.945851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [46676ad486f94a3f463bd84ec1509de43b7e428188c3865cf985ca8a9c32ed0e] <==
	I0723 14:25:19.100009       1 server_linux.go:69] "Using iptables proxy"
	E0723 14:25:22.041571       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:25.113680       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:28.185512       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:34.329710       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0723 14:25:43.544958       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0723 14:26:00.693936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0723 14:26:00.751861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:26:00.751961       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:26:00.752004       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:26:00.756930       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:26:00.758599       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:26:00.758979       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:26:00.760574       1 config.go:192] "Starting service config controller"
	I0723 14:26:00.760654       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:26:00.760713       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:26:00.760731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:26:00.761478       1 config.go:319] "Starting node config controller"
	I0723 14:26:00.777252       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:26:00.861589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:26:00.862795       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:26:00.878928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7972ddd5dc32d45f0ba4ef9fed42b03472f223384d0d2c716274a88fc10a8090] <==
	W0723 14:23:23.045772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:23:23.045869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:23:23.112473       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:23:23.112549       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:23:23.119687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:23:23.119783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:23:23.231252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:23:23.231339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:23:23.388384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:23:23.388429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:23:23.390500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:23:23.390572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:23:24.101389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:23:24.101519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:23:24.289841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:24.289929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:24.757516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:24.757581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.235301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:25.235384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.364313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:23:25.364358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:23:25.821567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:23:25.821620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:23:30.301802       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fa91958d57171cb6c27ede626a74eff15a7a96440583b91067d261022b16e2db] <==
	W0723 14:25:46.140863       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:46.140964       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:46.574697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:46.574775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:47.092462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:47.092571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:47.521751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:47.521862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.267820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.267904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.457289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.457425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:48.621000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	E0723 14:25:48.621213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.103:8443: connect: connection refused
	W0723 14:25:50.662618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:25:50.662718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:25:50.662805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:25:50.662842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:25:50.662930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:25:50.662962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:26:07.102543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0723 14:28:11.709531       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kkrzj\": pod busybox-fc5497c4f-kkrzj is already assigned to node \"ha-533645-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kkrzj" node="ha-533645-m04"
	E0723 14:28:11.709791       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cf837f06-87f0-4b63-b82c-30ad2d88b85f(default/busybox-fc5497c4f-kkrzj) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kkrzj"
	E0723 14:28:11.709832       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kkrzj\": pod busybox-fc5497c4f-kkrzj is already assigned to node \"ha-533645-m04\"" pod="default/busybox-fc5497c4f-kkrzj"
	I0723 14:28:11.709900       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kkrzj" node="ha-533645-m04"
	
	
	==> kubelet <==
	Jul 23 14:28:59 ha-533645 kubelet[1366]: E0723 14:28:59.413440    1366 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-533645?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 23 14:29:05 ha-533645 kubelet[1366]: E0723 14:29:05.999412    1366 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-533645\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 23 14:29:09 ha-533645 kubelet[1366]: E0723 14:29:09.414672    1366 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-533645?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.524883    1366 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: E0723 14:29:12.524941    1366 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-533645\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-533645?timeout=10s\": http2: client connection lost"
	Jul 23 14:29:12 ha-533645 kubelet[1366]: E0723 14:29:12.525182    1366 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-533645?timeout=10s\": http2: client connection lost"
	Jul 23 14:29:12 ha-533645 kubelet[1366]: I0723 14:29:12.525628    1366 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.524981    1366 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525000    1366 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525022    1366 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525045    1366 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525062    1366 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525074    1366 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525090    1366 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:12 ha-533645 kubelet[1366]: W0723 14:29:12.525105    1366 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 23 14:29:36 ha-533645 kubelet[1366]: E0723 14:29:36.827331    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:29:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:29:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:29:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:29:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:30:36 ha-533645 kubelet[1366]: E0723 14:30:36.826783    1366 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:30:36 ha-533645 kubelet[1366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:30:36 ha-533645 kubelet[1366]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:30:36 ha-533645 kubelet[1366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:30:36 ha-533645 kubelet[1366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 14:30:48.321524   38875 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-533645 -n ha-533645
helpers_test.go:261: (dbg) Run:  kubectl --context ha-533645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (322.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-574866
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-574866
E0723 14:47:11.818846   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-574866: exit status 82 (2m1.798484945s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-574866-m03"  ...
	* Stopping node "multinode-574866-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-574866" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-574866 --wait=true -v=8 --alsologtostderr
E0723 14:49:49.700324   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:50:14.863958   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-574866 --wait=true -v=8 --alsologtostderr: (3m18.321752542s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-574866
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-574866 -n multinode-574866
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-574866 logs -n 25: (1.439054223s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866:/home/docker/cp-test_multinode-574866-m02_multinode-574866.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866 sudo cat                                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m02_multinode-574866.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03:/home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866-m03 sudo cat                                   | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp testdata/cp-test.txt                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866:/home/docker/cp-test_multinode-574866-m03_multinode-574866.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866 sudo cat                                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m03_multinode-574866.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02:/home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866-m02 sudo cat                                   | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-574866 node stop m03                                                          | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:45 UTC |
	| node    | multinode-574866 node start                                                             | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC | 23 Jul 24 14:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-574866                                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC |                     |
	| stop    | -p multinode-574866                                                                     | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC |                     |
	| start   | -p multinode-574866                                                                     | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:47 UTC | 23 Jul 24 14:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-574866                                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:47:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:47:43.368643   48243 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:47:43.368911   48243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:47:43.368920   48243 out.go:304] Setting ErrFile to fd 2...
	I0723 14:47:43.368926   48243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:47:43.369109   48243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:47:43.369675   48243 out.go:298] Setting JSON to false
	I0723 14:47:43.370616   48243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5409,"bootTime":1721740654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:47:43.370672   48243 start.go:139] virtualization: kvm guest
	I0723 14:47:43.372846   48243 out.go:177] * [multinode-574866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:47:43.374239   48243 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:47:43.374280   48243 notify.go:220] Checking for updates...
	I0723 14:47:43.376983   48243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:47:43.378304   48243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:47:43.379517   48243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:47:43.380900   48243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:47:43.382108   48243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:47:43.383738   48243 config.go:182] Loaded profile config "multinode-574866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:47:43.383878   48243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:47:43.384347   48243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:47:43.384402   48243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:47:43.400623   48243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0723 14:47:43.400994   48243 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:47:43.401462   48243 main.go:141] libmachine: Using API Version  1
	I0723 14:47:43.401480   48243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:47:43.401921   48243 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:47:43.402122   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.437618   48243 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 14:47:43.439038   48243 start.go:297] selected driver: kvm2
	I0723 14:47:43.439055   48243 start.go:901] validating driver "kvm2" against &{Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:47:43.439278   48243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:47:43.439701   48243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:47:43.439785   48243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:47:43.454892   48243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:47:43.455710   48243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:47:43.455741   48243 cni.go:84] Creating CNI manager for ""
	I0723 14:47:43.455747   48243 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0723 14:47:43.455821   48243 start.go:340] cluster config:
	{Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:47:43.455968   48243 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:47:43.457886   48243 out.go:177] * Starting "multinode-574866" primary control-plane node in "multinode-574866" cluster
	I0723 14:47:43.459144   48243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:47:43.459174   48243 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:47:43.459181   48243 cache.go:56] Caching tarball of preloaded images
	I0723 14:47:43.459252   48243 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:47:43.459263   48243 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:47:43.459380   48243 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/config.json ...
	I0723 14:47:43.459564   48243 start.go:360] acquireMachinesLock for multinode-574866: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:47:43.459600   48243 start.go:364] duration metric: took 20.98µs to acquireMachinesLock for "multinode-574866"
	I0723 14:47:43.459613   48243 start.go:96] Skipping create...Using existing machine configuration
	I0723 14:47:43.459623   48243 fix.go:54] fixHost starting: 
	I0723 14:47:43.459866   48243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:47:43.459894   48243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:47:43.474100   48243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0723 14:47:43.474540   48243 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:47:43.475074   48243 main.go:141] libmachine: Using API Version  1
	I0723 14:47:43.475101   48243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:47:43.475455   48243 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:47:43.475637   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.475822   48243 main.go:141] libmachine: (multinode-574866) Calling .GetState
	I0723 14:47:43.477578   48243 fix.go:112] recreateIfNeeded on multinode-574866: state=Running err=<nil>
	W0723 14:47:43.477610   48243 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 14:47:43.480746   48243 out.go:177] * Updating the running kvm2 "multinode-574866" VM ...
	I0723 14:47:43.482210   48243 machine.go:94] provisionDockerMachine start ...
	I0723 14:47:43.482231   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.482486   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.485066   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.485590   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.485617   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.485737   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.485896   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.486070   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.486233   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.486406   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.486614   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.486627   48243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:47:43.599287   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-574866
	
	I0723 14:47:43.599321   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.599544   48243 buildroot.go:166] provisioning hostname "multinode-574866"
	I0723 14:47:43.599566   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.599763   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.602642   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.602956   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.602973   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.603151   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.603322   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.603456   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.603567   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.603736   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.603930   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.603944   48243 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-574866 && echo "multinode-574866" | sudo tee /etc/hostname
	I0723 14:47:43.725083   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-574866
	
	I0723 14:47:43.725118   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.728059   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.728452   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.728486   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.728610   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.728789   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.728954   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.729085   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.729235   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.729401   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.729416   48243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-574866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-574866/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-574866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:47:43.839175   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:47:43.839204   48243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:47:43.839230   48243 buildroot.go:174] setting up certificates
	I0723 14:47:43.839241   48243 provision.go:84] configureAuth start
	I0723 14:47:43.839253   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.839555   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:47:43.842074   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.842441   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.842468   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.842643   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.844897   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.845312   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.845339   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.845400   48243 provision.go:143] copyHostCerts
	I0723 14:47:43.845433   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:47:43.845465   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:47:43.845475   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:47:43.845540   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:47:43.845635   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:47:43.845660   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:47:43.845667   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:47:43.845691   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:47:43.845745   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:47:43.845760   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:47:43.845769   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:47:43.845795   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:47:43.845851   48243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.multinode-574866 san=[127.0.0.1 192.168.39.146 localhost minikube multinode-574866]
	I0723 14:47:43.900898   48243 provision.go:177] copyRemoteCerts
	I0723 14:47:43.900963   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:47:43.900987   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.903874   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.904218   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.904249   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.904446   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.904629   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.904785   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.904882   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:47:43.990360   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:47:43.990440   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0723 14:47:44.016378   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:47:44.016466   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:47:44.041436   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:47:44.041536   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:47:44.069079   48243 provision.go:87] duration metric: took 229.826098ms to configureAuth
	I0723 14:47:44.069105   48243 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:47:44.069308   48243 config.go:182] Loaded profile config "multinode-574866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:47:44.069389   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:44.072288   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:44.072673   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:44.072702   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:44.072888   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:44.073101   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:44.073272   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:44.073492   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:44.073684   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:44.073846   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:44.073860   48243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:49:14.729858   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:49:14.729885   48243 machine.go:97] duration metric: took 1m31.247661262s to provisionDockerMachine
	I0723 14:49:14.729900   48243 start.go:293] postStartSetup for "multinode-574866" (driver="kvm2")
	I0723 14:49:14.729914   48243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:49:14.729934   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.730305   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:49:14.730362   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.733509   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.733934   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.733954   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.734121   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.734297   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.734511   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.734678   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:14.822034   48243 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:49:14.826358   48243 command_runner.go:130] > NAME=Buildroot
	I0723 14:49:14.826397   48243 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0723 14:49:14.826404   48243 command_runner.go:130] > ID=buildroot
	I0723 14:49:14.826411   48243 command_runner.go:130] > VERSION_ID=2023.02.9
	I0723 14:49:14.826418   48243 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0723 14:49:14.826471   48243 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:49:14.826502   48243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:49:14.826576   48243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:49:14.826817   48243 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:49:14.826836   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:49:14.826947   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:49:14.836119   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:49:14.858330   48243 start.go:296] duration metric: took 128.416201ms for postStartSetup
	I0723 14:49:14.858394   48243 fix.go:56] duration metric: took 1m31.398769382s for fixHost
	I0723 14:49:14.858423   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.860947   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.861259   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.861287   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.861400   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.861692   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.861846   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.861959   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.862127   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:49:14.862286   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:49:14.862310   48243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:49:14.970781   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721746154.942689956
	
	I0723 14:49:14.970805   48243 fix.go:216] guest clock: 1721746154.942689956
	I0723 14:49:14.970815   48243 fix.go:229] Guest: 2024-07-23 14:49:14.942689956 +0000 UTC Remote: 2024-07-23 14:49:14.858400853 +0000 UTC m=+91.523531233 (delta=84.289103ms)
	I0723 14:49:14.970847   48243 fix.go:200] guest clock delta is within tolerance: 84.289103ms
	I0723 14:49:14.970854   48243 start.go:83] releasing machines lock for "multinode-574866", held for 1m31.511247967s
	I0723 14:49:14.970876   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.971158   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:49:14.973903   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.974291   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.974307   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.974551   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.974947   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.975167   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.975301   48243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:49:14.975356   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.975405   48243 ssh_runner.go:195] Run: cat /version.json
	I0723 14:49:14.975427   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.977898   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.977931   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978225   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.978266   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978295   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.978309   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978397   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.978569   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.978638   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.978725   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.978766   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.978990   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.978999   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:14.979140   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:15.055241   48243 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0723 14:49:15.055439   48243 ssh_runner.go:195] Run: systemctl --version
	I0723 14:49:15.088002   48243 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0723 14:49:15.088713   48243 command_runner.go:130] > systemd 252 (252)
	I0723 14:49:15.088748   48243 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0723 14:49:15.088816   48243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:49:15.246660   48243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 14:49:15.253505   48243 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0723 14:49:15.253656   48243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:49:15.253717   48243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:49:15.262251   48243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 14:49:15.262270   48243 start.go:495] detecting cgroup driver to use...
	I0723 14:49:15.262331   48243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:49:15.277707   48243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:49:15.291470   48243 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:49:15.291529   48243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:49:15.304655   48243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:49:15.317702   48243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:49:15.459715   48243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:49:15.596113   48243 docker.go:233] disabling docker service ...
	I0723 14:49:15.596199   48243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:49:15.612087   48243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:49:15.625351   48243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:49:15.762471   48243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:49:15.897395   48243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:49:15.910475   48243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:49:15.928012   48243 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0723 14:49:15.928508   48243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:49:15.928569   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.938850   48243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:49:15.938924   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.949438   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.959597   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.970042   48243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:49:15.980040   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.989668   48243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:16.000156   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:16.010036   48243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:49:16.018492   48243 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0723 14:49:16.018858   48243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:49:16.027808   48243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:49:16.166504   48243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:49:16.706143   48243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:49:16.706219   48243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:49:16.710639   48243 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0723 14:49:16.710658   48243 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0723 14:49:16.710676   48243 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I0723 14:49:16.710682   48243 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0723 14:49:16.710687   48243 command_runner.go:130] > Access: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710697   48243 command_runner.go:130] > Modify: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710705   48243 command_runner.go:130] > Change: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710710   48243 command_runner.go:130] >  Birth: -
	I0723 14:49:16.710748   48243 start.go:563] Will wait 60s for crictl version
	I0723 14:49:16.710785   48243 ssh_runner.go:195] Run: which crictl
	I0723 14:49:16.714000   48243 command_runner.go:130] > /usr/bin/crictl
	I0723 14:49:16.714046   48243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:49:16.749009   48243 command_runner.go:130] > Version:  0.1.0
	I0723 14:49:16.749033   48243 command_runner.go:130] > RuntimeName:  cri-o
	I0723 14:49:16.749040   48243 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0723 14:49:16.749048   48243 command_runner.go:130] > RuntimeApiVersion:  v1
	I0723 14:49:16.749075   48243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:49:16.749156   48243 ssh_runner.go:195] Run: crio --version
	I0723 14:49:16.775577   48243 command_runner.go:130] > crio version 1.29.1
	I0723 14:49:16.775600   48243 command_runner.go:130] > Version:        1.29.1
	I0723 14:49:16.775605   48243 command_runner.go:130] > GitCommit:      unknown
	I0723 14:49:16.775609   48243 command_runner.go:130] > GitCommitDate:  unknown
	I0723 14:49:16.775613   48243 command_runner.go:130] > GitTreeState:   clean
	I0723 14:49:16.775619   48243 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0723 14:49:16.775622   48243 command_runner.go:130] > GoVersion:      go1.21.6
	I0723 14:49:16.775627   48243 command_runner.go:130] > Compiler:       gc
	I0723 14:49:16.775631   48243 command_runner.go:130] > Platform:       linux/amd64
	I0723 14:49:16.775634   48243 command_runner.go:130] > Linkmode:       dynamic
	I0723 14:49:16.775638   48243 command_runner.go:130] > BuildTags:      
	I0723 14:49:16.775642   48243 command_runner.go:130] >   containers_image_ostree_stub
	I0723 14:49:16.775645   48243 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0723 14:49:16.775649   48243 command_runner.go:130] >   btrfs_noversion
	I0723 14:49:16.775655   48243 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0723 14:49:16.775661   48243 command_runner.go:130] >   libdm_no_deferred_remove
	I0723 14:49:16.775666   48243 command_runner.go:130] >   seccomp
	I0723 14:49:16.775672   48243 command_runner.go:130] > LDFlags:          unknown
	I0723 14:49:16.775678   48243 command_runner.go:130] > SeccompEnabled:   true
	I0723 14:49:16.775705   48243 command_runner.go:130] > AppArmorEnabled:  false
	I0723 14:49:16.776770   48243 ssh_runner.go:195] Run: crio --version
	I0723 14:49:16.803817   48243 command_runner.go:130] > crio version 1.29.1
	I0723 14:49:16.803839   48243 command_runner.go:130] > Version:        1.29.1
	I0723 14:49:16.803846   48243 command_runner.go:130] > GitCommit:      unknown
	I0723 14:49:16.803850   48243 command_runner.go:130] > GitCommitDate:  unknown
	I0723 14:49:16.803854   48243 command_runner.go:130] > GitTreeState:   clean
	I0723 14:49:16.803860   48243 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0723 14:49:16.803864   48243 command_runner.go:130] > GoVersion:      go1.21.6
	I0723 14:49:16.803868   48243 command_runner.go:130] > Compiler:       gc
	I0723 14:49:16.803874   48243 command_runner.go:130] > Platform:       linux/amd64
	I0723 14:49:16.803878   48243 command_runner.go:130] > Linkmode:       dynamic
	I0723 14:49:16.803881   48243 command_runner.go:130] > BuildTags:      
	I0723 14:49:16.803886   48243 command_runner.go:130] >   containers_image_ostree_stub
	I0723 14:49:16.803889   48243 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0723 14:49:16.803893   48243 command_runner.go:130] >   btrfs_noversion
	I0723 14:49:16.803898   48243 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0723 14:49:16.803902   48243 command_runner.go:130] >   libdm_no_deferred_remove
	I0723 14:49:16.803906   48243 command_runner.go:130] >   seccomp
	I0723 14:49:16.803910   48243 command_runner.go:130] > LDFlags:          unknown
	I0723 14:49:16.803917   48243 command_runner.go:130] > SeccompEnabled:   true
	I0723 14:49:16.803922   48243 command_runner.go:130] > AppArmorEnabled:  false
	I0723 14:49:16.807103   48243 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:49:16.808430   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:49:16.811403   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:16.811787   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:16.811808   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:16.812075   48243 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:49:16.816109   48243 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0723 14:49:16.816193   48243 kubeadm.go:883] updating cluster {Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:49:16.816368   48243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:49:16.816414   48243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:49:16.859703   48243 command_runner.go:130] > {
	I0723 14:49:16.859725   48243 command_runner.go:130] >   "images": [
	I0723 14:49:16.859729   48243 command_runner.go:130] >     {
	I0723 14:49:16.859736   48243 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0723 14:49:16.859741   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859747   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0723 14:49:16.859751   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859755   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859763   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0723 14:49:16.859770   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0723 14:49:16.859775   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859779   48243 command_runner.go:130] >       "size": "87165492",
	I0723 14:49:16.859783   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859787   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859792   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859797   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859800   48243 command_runner.go:130] >     },
	I0723 14:49:16.859804   48243 command_runner.go:130] >     {
	I0723 14:49:16.859811   48243 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0723 14:49:16.859816   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859824   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0723 14:49:16.859828   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859833   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859840   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0723 14:49:16.859849   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0723 14:49:16.859852   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859856   48243 command_runner.go:130] >       "size": "87174707",
	I0723 14:49:16.859861   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859869   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859873   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859877   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859880   48243 command_runner.go:130] >     },
	I0723 14:49:16.859884   48243 command_runner.go:130] >     {
	I0723 14:49:16.859889   48243 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0723 14:49:16.859894   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859903   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0723 14:49:16.859909   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859913   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859919   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0723 14:49:16.859929   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0723 14:49:16.859934   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859938   48243 command_runner.go:130] >       "size": "1363676",
	I0723 14:49:16.859942   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859948   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859953   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859957   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859960   48243 command_runner.go:130] >     },
	I0723 14:49:16.859963   48243 command_runner.go:130] >     {
	I0723 14:49:16.859969   48243 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0723 14:49:16.859974   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859979   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0723 14:49:16.859984   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859988   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859995   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0723 14:49:16.860009   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0723 14:49:16.860014   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860019   48243 command_runner.go:130] >       "size": "31470524",
	I0723 14:49:16.860022   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860026   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860032   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860036   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860040   48243 command_runner.go:130] >     },
	I0723 14:49:16.860043   48243 command_runner.go:130] >     {
	I0723 14:49:16.860050   48243 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0723 14:49:16.860055   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860061   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0723 14:49:16.860067   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860070   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860084   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0723 14:49:16.860094   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0723 14:49:16.860098   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860106   48243 command_runner.go:130] >       "size": "61245718",
	I0723 14:49:16.860117   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860124   48243 command_runner.go:130] >       "username": "nonroot",
	I0723 14:49:16.860133   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860139   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860145   48243 command_runner.go:130] >     },
	I0723 14:49:16.860151   48243 command_runner.go:130] >     {
	I0723 14:49:16.860157   48243 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0723 14:49:16.860162   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860166   48243 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0723 14:49:16.860171   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860175   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860184   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0723 14:49:16.860190   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0723 14:49:16.860196   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860200   48243 command_runner.go:130] >       "size": "150779692",
	I0723 14:49:16.860205   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860209   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860215   48243 command_runner.go:130] >       },
	I0723 14:49:16.860219   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860225   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860233   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860238   48243 command_runner.go:130] >     },
	I0723 14:49:16.860246   48243 command_runner.go:130] >     {
	I0723 14:49:16.860256   48243 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0723 14:49:16.860265   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860272   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0723 14:49:16.860279   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860285   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860299   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0723 14:49:16.860316   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0723 14:49:16.860322   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860327   48243 command_runner.go:130] >       "size": "117609954",
	I0723 14:49:16.860331   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860341   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860347   48243 command_runner.go:130] >       },
	I0723 14:49:16.860355   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860362   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860366   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860369   48243 command_runner.go:130] >     },
	I0723 14:49:16.860372   48243 command_runner.go:130] >     {
	I0723 14:49:16.860378   48243 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0723 14:49:16.860384   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860389   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0723 14:49:16.860393   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860397   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860419   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0723 14:49:16.860430   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0723 14:49:16.860433   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860436   48243 command_runner.go:130] >       "size": "112198984",
	I0723 14:49:16.860440   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860446   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860450   48243 command_runner.go:130] >       },
	I0723 14:49:16.860454   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860457   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860461   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860464   48243 command_runner.go:130] >     },
	I0723 14:49:16.860467   48243 command_runner.go:130] >     {
	I0723 14:49:16.860473   48243 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0723 14:49:16.860477   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860481   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0723 14:49:16.860484   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860488   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860495   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0723 14:49:16.860501   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0723 14:49:16.860504   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860513   48243 command_runner.go:130] >       "size": "85953945",
	I0723 14:49:16.860516   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860520   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860523   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860526   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860529   48243 command_runner.go:130] >     },
	I0723 14:49:16.860536   48243 command_runner.go:130] >     {
	I0723 14:49:16.860542   48243 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0723 14:49:16.860545   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860559   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0723 14:49:16.860562   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860566   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860573   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0723 14:49:16.860579   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0723 14:49:16.860582   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860586   48243 command_runner.go:130] >       "size": "63051080",
	I0723 14:49:16.860589   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860592   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860595   48243 command_runner.go:130] >       },
	I0723 14:49:16.860598   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860602   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860607   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860610   48243 command_runner.go:130] >     },
	I0723 14:49:16.860616   48243 command_runner.go:130] >     {
	I0723 14:49:16.860622   48243 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0723 14:49:16.860627   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860632   48243 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0723 14:49:16.860637   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860640   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860647   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0723 14:49:16.860655   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0723 14:49:16.860659   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860665   48243 command_runner.go:130] >       "size": "750414",
	I0723 14:49:16.860669   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860675   48243 command_runner.go:130] >         "value": "65535"
	I0723 14:49:16.860678   48243 command_runner.go:130] >       },
	I0723 14:49:16.860681   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860685   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860691   48243 command_runner.go:130] >       "pinned": true
	I0723 14:49:16.860694   48243 command_runner.go:130] >     }
	I0723 14:49:16.860698   48243 command_runner.go:130] >   ]
	I0723 14:49:16.860700   48243 command_runner.go:130] > }
	I0723 14:49:16.860869   48243 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:49:16.860879   48243 crio.go:433] Images already preloaded, skipping extraction
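	The JSON blocks in this log are the raw output of "sudo crictl images --output json", which minikube inspects to decide whether the image preload/extraction step can be skipped. Purely as an illustrative sketch (not minikube's actual code), a minimal Go program that decodes that shape could look like the following; the struct field names simply mirror the keys visible in the log (id, repoTags, repoDigests, size, pinned).

	// Sketch only: decode the JSON printed by `sudo crictl images --output json`,
	// matching the structure visible in the log above. Not minikube's implementation.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // size is a quoted string in the output
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Run the same command the log shows being executed over SSH.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%s  tags=%v  pinned=%v\n", img.ID, img.RepoTags, img.Pinned)
		}
	}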
	I0723 14:49:16.860931   48243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:49:16.892664   48243 command_runner.go:130] > {
	I0723 14:49:16.892689   48243 command_runner.go:130] >   "images": [
	I0723 14:49:16.892695   48243 command_runner.go:130] >     {
	I0723 14:49:16.892708   48243 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0723 14:49:16.892714   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892724   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0723 14:49:16.892729   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892735   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.892751   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0723 14:49:16.892761   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0723 14:49:16.892766   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892773   48243 command_runner.go:130] >       "size": "87165492",
	I0723 14:49:16.892784   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.892791   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.892801   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.892808   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.892816   48243 command_runner.go:130] >     },
	I0723 14:49:16.892821   48243 command_runner.go:130] >     {
	I0723 14:49:16.892828   48243 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0723 14:49:16.892835   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892839   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0723 14:49:16.892847   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892853   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.892869   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0723 14:49:16.892885   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0723 14:49:16.892893   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892901   48243 command_runner.go:130] >       "size": "87174707",
	I0723 14:49:16.892910   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.892922   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.892930   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.892935   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.892939   48243 command_runner.go:130] >     },
	I0723 14:49:16.892953   48243 command_runner.go:130] >     {
	I0723 14:49:16.892967   48243 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0723 14:49:16.892977   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892988   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0723 14:49:16.892997   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893006   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893020   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0723 14:49:16.893032   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0723 14:49:16.893041   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893051   48243 command_runner.go:130] >       "size": "1363676",
	I0723 14:49:16.893060   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893067   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893077   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893087   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893095   48243 command_runner.go:130] >     },
	I0723 14:49:16.893103   48243 command_runner.go:130] >     {
	I0723 14:49:16.893116   48243 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0723 14:49:16.893124   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893133   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0723 14:49:16.893139   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893146   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893161   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0723 14:49:16.893184   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0723 14:49:16.893193   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893200   48243 command_runner.go:130] >       "size": "31470524",
	I0723 14:49:16.893206   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893215   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893223   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893233   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893240   48243 command_runner.go:130] >     },
	I0723 14:49:16.893246   48243 command_runner.go:130] >     {
	I0723 14:49:16.893258   48243 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0723 14:49:16.893266   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893274   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0723 14:49:16.893283   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893289   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893310   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0723 14:49:16.893324   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0723 14:49:16.893333   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893342   48243 command_runner.go:130] >       "size": "61245718",
	I0723 14:49:16.893351   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893362   48243 command_runner.go:130] >       "username": "nonroot",
	I0723 14:49:16.893367   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893370   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893374   48243 command_runner.go:130] >     },
	I0723 14:49:16.893377   48243 command_runner.go:130] >     {
	I0723 14:49:16.893383   48243 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0723 14:49:16.893389   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893394   48243 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0723 14:49:16.893398   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893402   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893410   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0723 14:49:16.893419   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0723 14:49:16.893424   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893428   48243 command_runner.go:130] >       "size": "150779692",
	I0723 14:49:16.893434   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893438   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893442   48243 command_runner.go:130] >       },
	I0723 14:49:16.893446   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893452   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893455   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893459   48243 command_runner.go:130] >     },
	I0723 14:49:16.893462   48243 command_runner.go:130] >     {
	I0723 14:49:16.893469   48243 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0723 14:49:16.893473   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893478   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0723 14:49:16.893483   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893487   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893494   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0723 14:49:16.893503   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0723 14:49:16.893508   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893512   48243 command_runner.go:130] >       "size": "117609954",
	I0723 14:49:16.893523   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893529   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893532   48243 command_runner.go:130] >       },
	I0723 14:49:16.893538   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893542   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893553   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893558   48243 command_runner.go:130] >     },
	I0723 14:49:16.893561   48243 command_runner.go:130] >     {
	I0723 14:49:16.893567   48243 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0723 14:49:16.893572   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893577   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0723 14:49:16.893583   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893587   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893616   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0723 14:49:16.893626   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0723 14:49:16.893629   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893633   48243 command_runner.go:130] >       "size": "112198984",
	I0723 14:49:16.893637   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893641   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893646   48243 command_runner.go:130] >       },
	I0723 14:49:16.893650   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893654   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893657   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893660   48243 command_runner.go:130] >     },
	I0723 14:49:16.893664   48243 command_runner.go:130] >     {
	I0723 14:49:16.893669   48243 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0723 14:49:16.893676   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893681   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0723 14:49:16.893686   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893690   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893697   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0723 14:49:16.893704   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0723 14:49:16.893707   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893710   48243 command_runner.go:130] >       "size": "85953945",
	I0723 14:49:16.893714   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893718   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893725   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893729   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893732   48243 command_runner.go:130] >     },
	I0723 14:49:16.893735   48243 command_runner.go:130] >     {
	I0723 14:49:16.893740   48243 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0723 14:49:16.893744   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893748   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0723 14:49:16.893751   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893755   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893761   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0723 14:49:16.893767   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0723 14:49:16.893771   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893774   48243 command_runner.go:130] >       "size": "63051080",
	I0723 14:49:16.893777   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893781   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893784   48243 command_runner.go:130] >       },
	I0723 14:49:16.893788   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893791   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893795   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893798   48243 command_runner.go:130] >     },
	I0723 14:49:16.893801   48243 command_runner.go:130] >     {
	I0723 14:49:16.893807   48243 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0723 14:49:16.893810   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893815   48243 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0723 14:49:16.893818   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893822   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893829   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0723 14:49:16.893839   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0723 14:49:16.893844   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893848   48243 command_runner.go:130] >       "size": "750414",
	I0723 14:49:16.893851   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893855   48243 command_runner.go:130] >         "value": "65535"
	I0723 14:49:16.893858   48243 command_runner.go:130] >       },
	I0723 14:49:16.893862   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893869   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893872   48243 command_runner.go:130] >       "pinned": true
	I0723 14:49:16.893880   48243 command_runner.go:130] >     }
	I0723 14:49:16.893885   48243 command_runner.go:130] >   ]
	I0723 14:49:16.893888   48243 command_runner.go:130] > }
	I0723 14:49:16.893996   48243 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:49:16.894007   48243 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:49:16.894014   48243 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0723 14:49:16.894114   48243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-574866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
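	The kubelet unit fragment above is what minikube writes for this node: ExecStart is cleared and re-set with node-specific flags (--hostname-override, --node-ip, and the kubeconfig paths). As a hypothetical sketch only, and not minikube's real templating code, assembling that flag line in Go might look like this; the helper name and parameters are invented for illustration, while the paths and values are taken from the logged unit.

	// Hypothetical sketch: rebuild the kubelet ExecStart line shown in the unit above.
	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart mirrors the flags visible in the logged [Service] section.
	// The function and its parameters are illustrative, not minikube's actual API.
	func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
			k8sVersion, strings.Join(flags, " "))
	}

	func main() {
		// Values taken from the log lines above.
		fmt.Println(kubeletExecStart("v1.30.3", "multinode-574866", "192.168.39.146"))
	}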
	I0723 14:49:16.894181   48243 ssh_runner.go:195] Run: crio config
	I0723 14:49:16.936698   48243 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0723 14:49:16.936727   48243 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0723 14:49:16.936735   48243 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0723 14:49:16.936739   48243 command_runner.go:130] > #
	I0723 14:49:16.936749   48243 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0723 14:49:16.936758   48243 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0723 14:49:16.936766   48243 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0723 14:49:16.936778   48243 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0723 14:49:16.936784   48243 command_runner.go:130] > # reload'.
	I0723 14:49:16.936792   48243 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0723 14:49:16.936802   48243 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0723 14:49:16.936812   48243 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0723 14:49:16.936828   48243 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0723 14:49:16.936834   48243 command_runner.go:130] > [crio]
	I0723 14:49:16.936845   48243 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0723 14:49:16.936855   48243 command_runner.go:130] > # containers images, in this directory.
	I0723 14:49:16.936863   48243 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0723 14:49:16.936878   48243 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0723 14:49:16.936889   48243 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0723 14:49:16.936903   48243 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0723 14:49:16.936912   48243 command_runner.go:130] > # imagestore = ""
	I0723 14:49:16.936926   48243 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0723 14:49:16.936939   48243 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0723 14:49:16.936963   48243 command_runner.go:130] > storage_driver = "overlay"
	I0723 14:49:16.936975   48243 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0723 14:49:16.936984   48243 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0723 14:49:16.936994   48243 command_runner.go:130] > storage_option = [
	I0723 14:49:16.937009   48243 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0723 14:49:16.937017   48243 command_runner.go:130] > ]
	I0723 14:49:16.937028   48243 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0723 14:49:16.937040   48243 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0723 14:49:16.937051   48243 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0723 14:49:16.937064   48243 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0723 14:49:16.937077   48243 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0723 14:49:16.937087   48243 command_runner.go:130] > # always happen on a node reboot
	I0723 14:49:16.937098   48243 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0723 14:49:16.937119   48243 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0723 14:49:16.937132   48243 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0723 14:49:16.937143   48243 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0723 14:49:16.937152   48243 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0723 14:49:16.937166   48243 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0723 14:49:16.937181   48243 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0723 14:49:16.937191   48243 command_runner.go:130] > # internal_wipe = true
	I0723 14:49:16.937205   48243 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0723 14:49:16.937217   48243 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0723 14:49:16.937227   48243 command_runner.go:130] > # internal_repair = false
	I0723 14:49:16.937239   48243 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0723 14:49:16.937250   48243 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0723 14:49:16.937263   48243 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0723 14:49:16.937275   48243 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0723 14:49:16.937287   48243 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0723 14:49:16.937296   48243 command_runner.go:130] > [crio.api]
	I0723 14:49:16.937305   48243 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0723 14:49:16.937315   48243 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0723 14:49:16.937338   48243 command_runner.go:130] > # IP address on which the stream server will listen.
	I0723 14:49:16.937348   48243 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0723 14:49:16.937359   48243 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0723 14:49:16.937370   48243 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0723 14:49:16.937379   48243 command_runner.go:130] > # stream_port = "0"
	I0723 14:49:16.937396   48243 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0723 14:49:16.937405   48243 command_runner.go:130] > # stream_enable_tls = false
	I0723 14:49:16.937416   48243 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0723 14:49:16.937425   48243 command_runner.go:130] > # stream_idle_timeout = ""
	I0723 14:49:16.937438   48243 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0723 14:49:16.937449   48243 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0723 14:49:16.937454   48243 command_runner.go:130] > # minutes.
	I0723 14:49:16.937459   48243 command_runner.go:130] > # stream_tls_cert = ""
	I0723 14:49:16.937467   48243 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0723 14:49:16.937475   48243 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0723 14:49:16.937481   48243 command_runner.go:130] > # stream_tls_key = ""
	I0723 14:49:16.937489   48243 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0723 14:49:16.937498   48243 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0723 14:49:16.937532   48243 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0723 14:49:16.937543   48243 command_runner.go:130] > # stream_tls_ca = ""
	I0723 14:49:16.937558   48243 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0723 14:49:16.937568   48243 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0723 14:49:16.937579   48243 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0723 14:49:16.937587   48243 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0723 14:49:16.937600   48243 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0723 14:49:16.937620   48243 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0723 14:49:16.937629   48243 command_runner.go:130] > [crio.runtime]
	I0723 14:49:16.937640   48243 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0723 14:49:16.937652   48243 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0723 14:49:16.937661   48243 command_runner.go:130] > # "nofile=1024:2048"
	I0723 14:49:16.937671   48243 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0723 14:49:16.937680   48243 command_runner.go:130] > # default_ulimits = [
	I0723 14:49:16.937688   48243 command_runner.go:130] > # ]
	I0723 14:49:16.937699   48243 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0723 14:49:16.937707   48243 command_runner.go:130] > # no_pivot = false
	I0723 14:49:16.937717   48243 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0723 14:49:16.937730   48243 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0723 14:49:16.937741   48243 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0723 14:49:16.937752   48243 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0723 14:49:16.937761   48243 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0723 14:49:16.937776   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0723 14:49:16.937792   48243 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0723 14:49:16.937802   48243 command_runner.go:130] > # Cgroup setting for conmon
	I0723 14:49:16.937816   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0723 14:49:16.937825   48243 command_runner.go:130] > conmon_cgroup = "pod"
	I0723 14:49:16.937838   48243 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0723 14:49:16.937849   48243 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0723 14:49:16.937859   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0723 14:49:16.937864   48243 command_runner.go:130] > conmon_env = [
	I0723 14:49:16.937872   48243 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0723 14:49:16.937877   48243 command_runner.go:130] > ]
	I0723 14:49:16.937885   48243 command_runner.go:130] > # Additional environment variables to set for all the
	I0723 14:49:16.937894   48243 command_runner.go:130] > # containers. These are overridden if set in the
	I0723 14:49:16.937903   48243 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0723 14:49:16.937912   48243 command_runner.go:130] > # default_env = [
	I0723 14:49:16.937917   48243 command_runner.go:130] > # ]
	I0723 14:49:16.937929   48243 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0723 14:49:16.937940   48243 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0723 14:49:16.937947   48243 command_runner.go:130] > # selinux = false
	I0723 14:49:16.937957   48243 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0723 14:49:16.937967   48243 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0723 14:49:16.937976   48243 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0723 14:49:16.937985   48243 command_runner.go:130] > # seccomp_profile = ""
	I0723 14:49:16.937993   48243 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0723 14:49:16.938004   48243 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0723 14:49:16.938014   48243 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0723 14:49:16.938024   48243 command_runner.go:130] > # which might increase security.
	I0723 14:49:16.938031   48243 command_runner.go:130] > # This option is currently deprecated,
	I0723 14:49:16.938042   48243 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0723 14:49:16.938046   48243 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0723 14:49:16.938054   48243 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0723 14:49:16.938060   48243 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0723 14:49:16.938068   48243 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0723 14:49:16.938074   48243 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0723 14:49:16.938081   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.938085   48243 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0723 14:49:16.938091   48243 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0723 14:49:16.938105   48243 command_runner.go:130] > # the cgroup blockio controller.
	I0723 14:49:16.938115   48243 command_runner.go:130] > # blockio_config_file = ""
	I0723 14:49:16.938127   48243 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0723 14:49:16.938136   48243 command_runner.go:130] > # blockio parameters.
	I0723 14:49:16.938143   48243 command_runner.go:130] > # blockio_reload = false
	I0723 14:49:16.938153   48243 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0723 14:49:16.938162   48243 command_runner.go:130] > # irqbalance daemon.
	I0723 14:49:16.938170   48243 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0723 14:49:16.938182   48243 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0723 14:49:16.938195   48243 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0723 14:49:16.938208   48243 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0723 14:49:16.938220   48243 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0723 14:49:16.938233   48243 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0723 14:49:16.938242   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.938247   48243 command_runner.go:130] > # rdt_config_file = ""
	I0723 14:49:16.938251   48243 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0723 14:49:16.938257   48243 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0723 14:49:16.938327   48243 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0723 14:49:16.938342   48243 command_runner.go:130] > # separate_pull_cgroup = ""
	I0723 14:49:16.938351   48243 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0723 14:49:16.938360   48243 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0723 14:49:16.938370   48243 command_runner.go:130] > # will be added.
	I0723 14:49:16.938391   48243 command_runner.go:130] > # default_capabilities = [
	I0723 14:49:16.938400   48243 command_runner.go:130] > # 	"CHOWN",
	I0723 14:49:16.938407   48243 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0723 14:49:16.938415   48243 command_runner.go:130] > # 	"FSETID",
	I0723 14:49:16.938421   48243 command_runner.go:130] > # 	"FOWNER",
	I0723 14:49:16.938430   48243 command_runner.go:130] > # 	"SETGID",
	I0723 14:49:16.938437   48243 command_runner.go:130] > # 	"SETUID",
	I0723 14:49:16.938443   48243 command_runner.go:130] > # 	"SETPCAP",
	I0723 14:49:16.938449   48243 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0723 14:49:16.938455   48243 command_runner.go:130] > # 	"KILL",
	I0723 14:49:16.938460   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938474   48243 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0723 14:49:16.938487   48243 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0723 14:49:16.938497   48243 command_runner.go:130] > # add_inheritable_capabilities = false
	I0723 14:49:16.938515   48243 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0723 14:49:16.938524   48243 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0723 14:49:16.938528   48243 command_runner.go:130] > default_sysctls = [
	I0723 14:49:16.938533   48243 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0723 14:49:16.938539   48243 command_runner.go:130] > ]
	I0723 14:49:16.938543   48243 command_runner.go:130] > # List of devices on the host that a
	I0723 14:49:16.938549   48243 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0723 14:49:16.938554   48243 command_runner.go:130] > # allowed_devices = [
	I0723 14:49:16.938559   48243 command_runner.go:130] > # 	"/dev/fuse",
	I0723 14:49:16.938562   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938567   48243 command_runner.go:130] > # List of additional devices. specified as
	I0723 14:49:16.938575   48243 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0723 14:49:16.938582   48243 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0723 14:49:16.938587   48243 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0723 14:49:16.938594   48243 command_runner.go:130] > # additional_devices = [
	I0723 14:49:16.938597   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938602   48243 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0723 14:49:16.938608   48243 command_runner.go:130] > # cdi_spec_dirs = [
	I0723 14:49:16.938612   48243 command_runner.go:130] > # 	"/etc/cdi",
	I0723 14:49:16.938617   48243 command_runner.go:130] > # 	"/var/run/cdi",
	I0723 14:49:16.938621   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938632   48243 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0723 14:49:16.938637   48243 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0723 14:49:16.938643   48243 command_runner.go:130] > # Defaults to false.
	I0723 14:49:16.938650   48243 command_runner.go:130] > # device_ownership_from_security_context = false
	I0723 14:49:16.938662   48243 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0723 14:49:16.938675   48243 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0723 14:49:16.938684   48243 command_runner.go:130] > # hooks_dir = [
	I0723 14:49:16.938698   48243 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0723 14:49:16.938707   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938717   48243 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0723 14:49:16.938733   48243 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0723 14:49:16.938743   48243 command_runner.go:130] > # its default mounts from the following two files:
	I0723 14:49:16.938750   48243 command_runner.go:130] > #
	I0723 14:49:16.938760   48243 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0723 14:49:16.938773   48243 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0723 14:49:16.938790   48243 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0723 14:49:16.938796   48243 command_runner.go:130] > #
	I0723 14:49:16.938801   48243 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0723 14:49:16.938809   48243 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0723 14:49:16.938815   48243 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0723 14:49:16.938822   48243 command_runner.go:130] > #      only add mounts it finds in this file.
	I0723 14:49:16.938825   48243 command_runner.go:130] > #
	I0723 14:49:16.938829   48243 command_runner.go:130] > # default_mounts_file = ""
	I0723 14:49:16.938836   48243 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0723 14:49:16.938848   48243 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0723 14:49:16.938859   48243 command_runner.go:130] > pids_limit = 1024
	I0723 14:49:16.938872   48243 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0723 14:49:16.938884   48243 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0723 14:49:16.938898   48243 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0723 14:49:16.938912   48243 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0723 14:49:16.938922   48243 command_runner.go:130] > # log_size_max = -1
	I0723 14:49:16.938932   48243 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0723 14:49:16.938941   48243 command_runner.go:130] > # log_to_journald = false
	I0723 14:49:16.938950   48243 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0723 14:49:16.938960   48243 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0723 14:49:16.938972   48243 command_runner.go:130] > # Path to directory for container attach sockets.
	I0723 14:49:16.938978   48243 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0723 14:49:16.938983   48243 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0723 14:49:16.938989   48243 command_runner.go:130] > # bind_mount_prefix = ""
	I0723 14:49:16.938994   48243 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0723 14:49:16.939000   48243 command_runner.go:130] > # read_only = false
	I0723 14:49:16.939008   48243 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0723 14:49:16.939020   48243 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0723 14:49:16.939030   48243 command_runner.go:130] > # live configuration reload.
	I0723 14:49:16.939037   48243 command_runner.go:130] > # log_level = "info"
	I0723 14:49:16.939046   48243 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0723 14:49:16.939057   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.939065   48243 command_runner.go:130] > # log_filter = ""
	I0723 14:49:16.939078   48243 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0723 14:49:16.939093   48243 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0723 14:49:16.939103   48243 command_runner.go:130] > # separated by comma.
	I0723 14:49:16.939120   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939130   48243 command_runner.go:130] > # uid_mappings = ""
	I0723 14:49:16.939139   48243 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0723 14:49:16.939150   48243 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0723 14:49:16.939159   48243 command_runner.go:130] > # separated by comma.
	I0723 14:49:16.939174   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939183   48243 command_runner.go:130] > # gid_mappings = ""
	I0723 14:49:16.939192   48243 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0723 14:49:16.939205   48243 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0723 14:49:16.939218   48243 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0723 14:49:16.939232   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939241   48243 command_runner.go:130] > # minimum_mappable_uid = -1
	I0723 14:49:16.939248   48243 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0723 14:49:16.939258   48243 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0723 14:49:16.939269   48243 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0723 14:49:16.939284   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939294   48243 command_runner.go:130] > # minimum_mappable_gid = -1
	I0723 14:49:16.939304   48243 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0723 14:49:16.939316   48243 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0723 14:49:16.939331   48243 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0723 14:49:16.939338   48243 command_runner.go:130] > # ctr_stop_timeout = 30
	I0723 14:49:16.939346   48243 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0723 14:49:16.939358   48243 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0723 14:49:16.939369   48243 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0723 14:49:16.939379   48243 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0723 14:49:16.939389   48243 command_runner.go:130] > drop_infra_ctr = false
	I0723 14:49:16.939398   48243 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0723 14:49:16.939409   48243 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0723 14:49:16.939420   48243 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0723 14:49:16.939426   48243 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0723 14:49:16.939436   48243 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0723 14:49:16.939449   48243 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0723 14:49:16.939460   48243 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0723 14:49:16.939471   48243 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0723 14:49:16.939480   48243 command_runner.go:130] > # shared_cpuset = ""
	I0723 14:49:16.939490   48243 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0723 14:49:16.939506   48243 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0723 14:49:16.939513   48243 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0723 14:49:16.939524   48243 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0723 14:49:16.939535   48243 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0723 14:49:16.939550   48243 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0723 14:49:16.939563   48243 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0723 14:49:16.939573   48243 command_runner.go:130] > # enable_criu_support = false
	I0723 14:49:16.939583   48243 command_runner.go:130] > # Enable/disable the generation of the container,
	I0723 14:49:16.939593   48243 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0723 14:49:16.939601   48243 command_runner.go:130] > # enable_pod_events = false
	I0723 14:49:16.939613   48243 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0723 14:49:16.939626   48243 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0723 14:49:16.939637   48243 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0723 14:49:16.939646   48243 command_runner.go:130] > # default_runtime = "runc"
	I0723 14:49:16.939657   48243 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0723 14:49:16.939671   48243 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0723 14:49:16.939683   48243 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0723 14:49:16.939693   48243 command_runner.go:130] > # creation as a file is not desired either.
	I0723 14:49:16.939709   48243 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0723 14:49:16.939719   48243 command_runner.go:130] > # the hostname is being managed dynamically.
	I0723 14:49:16.939729   48243 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0723 14:49:16.939737   48243 command_runner.go:130] > # ]
	I0723 14:49:16.939750   48243 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0723 14:49:16.939760   48243 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0723 14:49:16.939768   48243 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0723 14:49:16.939777   48243 command_runner.go:130] > # Each entry in the table should follow the format:
	I0723 14:49:16.939785   48243 command_runner.go:130] > #
	I0723 14:49:16.939793   48243 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0723 14:49:16.939804   48243 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0723 14:49:16.939860   48243 command_runner.go:130] > # runtime_type = "oci"
	I0723 14:49:16.939870   48243 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0723 14:49:16.939879   48243 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0723 14:49:16.939888   48243 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0723 14:49:16.939896   48243 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0723 14:49:16.939904   48243 command_runner.go:130] > # monitor_env = []
	I0723 14:49:16.939912   48243 command_runner.go:130] > # privileged_without_host_devices = false
	I0723 14:49:16.939927   48243 command_runner.go:130] > # allowed_annotations = []
	I0723 14:49:16.939935   48243 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0723 14:49:16.939939   48243 command_runner.go:130] > # Where:
	I0723 14:49:16.939946   48243 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0723 14:49:16.939959   48243 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0723 14:49:16.939972   48243 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0723 14:49:16.939984   48243 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0723 14:49:16.939990   48243 command_runner.go:130] > #   in $PATH.
	I0723 14:49:16.940003   48243 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0723 14:49:16.940013   48243 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0723 14:49:16.940019   48243 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0723 14:49:16.940025   48243 command_runner.go:130] > #   state.
	I0723 14:49:16.940035   48243 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0723 14:49:16.940047   48243 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0723 14:49:16.940060   48243 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0723 14:49:16.940071   48243 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0723 14:49:16.940081   48243 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0723 14:49:16.940094   48243 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0723 14:49:16.940102   48243 command_runner.go:130] > #   The currently recognized values are:
	I0723 14:49:16.940109   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0723 14:49:16.940128   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0723 14:49:16.940140   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0723 14:49:16.940152   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0723 14:49:16.940166   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0723 14:49:16.940178   48243 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0723 14:49:16.940189   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0723 14:49:16.940200   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0723 14:49:16.940211   48243 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0723 14:49:16.940224   48243 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0723 14:49:16.940234   48243 command_runner.go:130] > #   deprecated option "conmon".
	I0723 14:49:16.940246   48243 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0723 14:49:16.940257   48243 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0723 14:49:16.940270   48243 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0723 14:49:16.940276   48243 command_runner.go:130] > #   should be moved to the container's cgroup
	I0723 14:49:16.940287   48243 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0723 14:49:16.940297   48243 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0723 14:49:16.940316   48243 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0723 14:49:16.940331   48243 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0723 14:49:16.940339   48243 command_runner.go:130] > #
	I0723 14:49:16.940349   48243 command_runner.go:130] > # Using the seccomp notifier feature:
	I0723 14:49:16.940356   48243 command_runner.go:130] > #
	I0723 14:49:16.940362   48243 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0723 14:49:16.940373   48243 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0723 14:49:16.940381   48243 command_runner.go:130] > #
	I0723 14:49:16.940391   48243 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0723 14:49:16.940408   48243 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0723 14:49:16.940416   48243 command_runner.go:130] > #
	I0723 14:49:16.940425   48243 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0723 14:49:16.940434   48243 command_runner.go:130] > # feature.
	I0723 14:49:16.940438   48243 command_runner.go:130] > #
	I0723 14:49:16.940448   48243 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0723 14:49:16.940458   48243 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0723 14:49:16.940471   48243 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0723 14:49:16.940483   48243 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0723 14:49:16.940495   48243 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0723 14:49:16.940503   48243 command_runner.go:130] > #
	I0723 14:49:16.940512   48243 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0723 14:49:16.940524   48243 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0723 14:49:16.940530   48243 command_runner.go:130] > #
	I0723 14:49:16.940536   48243 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0723 14:49:16.940546   48243 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0723 14:49:16.940554   48243 command_runner.go:130] > #
	I0723 14:49:16.940564   48243 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0723 14:49:16.940575   48243 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0723 14:49:16.940584   48243 command_runner.go:130] > # limitation.
	I0723 14:49:16.940593   48243 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0723 14:49:16.940602   48243 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0723 14:49:16.940608   48243 command_runner.go:130] > runtime_type = "oci"
	I0723 14:49:16.940615   48243 command_runner.go:130] > runtime_root = "/run/runc"
	I0723 14:49:16.940619   48243 command_runner.go:130] > runtime_config_path = ""
	I0723 14:49:16.940628   48243 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0723 14:49:16.940637   48243 command_runner.go:130] > monitor_cgroup = "pod"
	I0723 14:49:16.940650   48243 command_runner.go:130] > monitor_exec_cgroup = ""
	I0723 14:49:16.940658   48243 command_runner.go:130] > monitor_env = [
	I0723 14:49:16.940668   48243 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0723 14:49:16.940676   48243 command_runner.go:130] > ]
	I0723 14:49:16.940684   48243 command_runner.go:130] > privileged_without_host_devices = false
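To make the runtime-handler options above concrete, here is a minimal TOML sketch of an additional handler that allow-lists the seccomp notifier annotation; the handler name runc-debug and its runtime_root are illustrative assumptions, not part of this run's configuration:

	[crio.runtime.runtimes.runc-debug]
	# Hypothetical handler reusing the runc binary, intended for seccomp debugging.
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc-debug"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Allow pods to request the seccomp notifier feature described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then opt in by setting the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, as the comments above note.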
	I0723 14:49:16.940697   48243 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0723 14:49:16.940704   48243 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0723 14:49:16.940713   48243 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0723 14:49:16.940727   48243 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0723 14:49:16.940742   48243 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0723 14:49:16.940753   48243 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0723 14:49:16.940768   48243 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0723 14:49:16.940782   48243 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0723 14:49:16.940789   48243 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0723 14:49:16.940799   48243 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0723 14:49:16.940807   48243 command_runner.go:130] > # Example:
	I0723 14:49:16.940815   48243 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0723 14:49:16.940823   48243 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0723 14:49:16.940830   48243 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0723 14:49:16.940838   48243 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0723 14:49:16.940843   48243 command_runner.go:130] > # cpuset = 0
	I0723 14:49:16.940848   48243 command_runner.go:130] > # cpushares = "0-1"
	I0723 14:49:16.940853   48243 command_runner.go:130] > # Where:
	I0723 14:49:16.940860   48243 command_runner.go:130] > # The workload name is workload-type.
	I0723 14:49:16.940869   48243 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0723 14:49:16.940874   48243 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0723 14:49:16.940879   48243 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0723 14:49:16.940890   48243 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0723 14:49:16.940900   48243 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0723 14:49:16.940907   48243 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0723 14:49:16.940917   48243 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0723 14:49:16.940924   48243 command_runner.go:130] > # Default value is set to true
	I0723 14:49:16.940931   48243 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0723 14:49:16.940939   48243 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0723 14:49:16.940947   48243 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0723 14:49:16.940952   48243 command_runner.go:130] > # Default value is set to 'false'
	I0723 14:49:16.940961   48243 command_runner.go:130] > # disable_hostport_mapping = false
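For reference, the two options just described live in the [crio.runtime] table; a short sketch that simply spells out the documented defaults:

	[crio.runtime]
	hostnetwork_disable_selinux = true
	disable_hostport_mapping = false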
	I0723 14:49:16.940971   48243 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0723 14:49:16.940976   48243 command_runner.go:130] > #
	I0723 14:49:16.940985   48243 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0723 14:49:16.940994   48243 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0723 14:49:16.941004   48243 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0723 14:49:16.941013   48243 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0723 14:49:16.941022   48243 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0723 14:49:16.941027   48243 command_runner.go:130] > [crio.image]
	I0723 14:49:16.941038   48243 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0723 14:49:16.941043   48243 command_runner.go:130] > # default_transport = "docker://"
	I0723 14:49:16.941050   48243 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0723 14:49:16.941056   48243 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0723 14:49:16.941062   48243 command_runner.go:130] > # global_auth_file = ""
	I0723 14:49:16.941067   48243 command_runner.go:130] > # The image used to instantiate infra containers.
	I0723 14:49:16.941077   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.941086   48243 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0723 14:49:16.941099   48243 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0723 14:49:16.941111   48243 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0723 14:49:16.941119   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.941129   48243 command_runner.go:130] > # pause_image_auth_file = ""
	I0723 14:49:16.941137   48243 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0723 14:49:16.941148   48243 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0723 14:49:16.941155   48243 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0723 14:49:16.941160   48243 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0723 14:49:16.941167   48243 command_runner.go:130] > # pause_command = "/pause"
	I0723 14:49:16.941172   48243 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0723 14:49:16.941180   48243 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0723 14:49:16.941194   48243 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0723 14:49:16.941204   48243 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0723 14:49:16.941210   48243 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0723 14:49:16.941221   48243 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0723 14:49:16.941230   48243 command_runner.go:130] > # pinned_images = [
	I0723 14:49:16.941235   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941247   48243 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0723 14:49:16.941260   48243 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0723 14:49:16.941278   48243 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0723 14:49:16.941290   48243 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0723 14:49:16.941298   48243 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0723 14:49:16.941302   48243 command_runner.go:130] > # signature_policy = ""
	I0723 14:49:16.941307   48243 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0723 14:49:16.941316   48243 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0723 14:49:16.941321   48243 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0723 14:49:16.941332   48243 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0723 14:49:16.941338   48243 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0723 14:49:16.941344   48243 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0723 14:49:16.941350   48243 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0723 14:49:16.941358   48243 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0723 14:49:16.941362   48243 command_runner.go:130] > # changing them here.
	I0723 14:49:16.941366   48243 command_runner.go:130] > # insecure_registries = [
	I0723 14:49:16.941371   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941377   48243 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0723 14:49:16.941383   48243 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0723 14:49:16.941387   48243 command_runner.go:130] > # image_volumes = "mkdir"
	I0723 14:49:16.941395   48243 command_runner.go:130] > # Temporary directory to use for storing big files
	I0723 14:49:16.941399   48243 command_runner.go:130] > # big_files_temporary_dir = ""
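A hedged sketch of how the [crio.image] options above could be set explicitly; the registry host and the pinned-image glob are assumed examples, not values used in this test run:

	[crio.image]
	default_transport = "docker://"
	pause_image = "registry.k8s.io/pause:3.9"
	# Keep the pause image out of kubelet garbage collection (glob wildcard at the end).
	pinned_images = [
		"registry.k8s.io/pause:*",
	]
	# Prefer /etc/containers/registries.conf for registry settings; shown here only for illustration.
	insecure_registries = [
		"registry.internal.example:5000",
	]
	image_volumes = "mkdir"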
	I0723 14:49:16.941405   48243 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0723 14:49:16.941410   48243 command_runner.go:130] > # CNI plugins.
	I0723 14:49:16.941414   48243 command_runner.go:130] > [crio.network]
	I0723 14:49:16.941419   48243 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0723 14:49:16.941426   48243 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0723 14:49:16.941431   48243 command_runner.go:130] > # cni_default_network = ""
	I0723 14:49:16.941437   48243 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0723 14:49:16.941441   48243 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0723 14:49:16.941449   48243 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0723 14:49:16.941458   48243 command_runner.go:130] > # plugin_dirs = [
	I0723 14:49:16.941464   48243 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0723 14:49:16.941471   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941476   48243 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0723 14:49:16.941482   48243 command_runner.go:130] > [crio.metrics]
	I0723 14:49:16.941486   48243 command_runner.go:130] > # Globally enable or disable metrics support.
	I0723 14:49:16.941489   48243 command_runner.go:130] > enable_metrics = true
	I0723 14:49:16.941498   48243 command_runner.go:130] > # Specify enabled metrics collectors.
	I0723 14:49:16.941505   48243 command_runner.go:130] > # Per default all metrics are enabled.
	I0723 14:49:16.941510   48243 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0723 14:49:16.941518   48243 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0723 14:49:16.941526   48243 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0723 14:49:16.941532   48243 command_runner.go:130] > # metrics_collectors = [
	I0723 14:49:16.941536   48243 command_runner.go:130] > # 	"operations",
	I0723 14:49:16.941540   48243 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0723 14:49:16.941546   48243 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0723 14:49:16.941550   48243 command_runner.go:130] > # 	"operations_errors",
	I0723 14:49:16.941555   48243 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0723 14:49:16.941558   48243 command_runner.go:130] > # 	"image_pulls_by_name",
	I0723 14:49:16.941564   48243 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0723 14:49:16.941568   48243 command_runner.go:130] > # 	"image_pulls_failures",
	I0723 14:49:16.941576   48243 command_runner.go:130] > # 	"image_pulls_successes",
	I0723 14:49:16.941580   48243 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0723 14:49:16.941584   48243 command_runner.go:130] > # 	"image_layer_reuse",
	I0723 14:49:16.941588   48243 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0723 14:49:16.941592   48243 command_runner.go:130] > # 	"containers_oom_total",
	I0723 14:49:16.941596   48243 command_runner.go:130] > # 	"containers_oom",
	I0723 14:49:16.941599   48243 command_runner.go:130] > # 	"processes_defunct",
	I0723 14:49:16.941603   48243 command_runner.go:130] > # 	"operations_total",
	I0723 14:49:16.941607   48243 command_runner.go:130] > # 	"operations_latency_seconds",
	I0723 14:49:16.941611   48243 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0723 14:49:16.941618   48243 command_runner.go:130] > # 	"operations_errors_total",
	I0723 14:49:16.941622   48243 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0723 14:49:16.941627   48243 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0723 14:49:16.941631   48243 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0723 14:49:16.941637   48243 command_runner.go:130] > # 	"image_pulls_success_total",
	I0723 14:49:16.941641   48243 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0723 14:49:16.941645   48243 command_runner.go:130] > # 	"containers_oom_count_total",
	I0723 14:49:16.941649   48243 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0723 14:49:16.941656   48243 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0723 14:49:16.941659   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941664   48243 command_runner.go:130] > # The port on which the metrics server will listen.
	I0723 14:49:16.941669   48243 command_runner.go:130] > # metrics_port = 9090
	I0723 14:49:16.941680   48243 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0723 14:49:16.941685   48243 command_runner.go:130] > # metrics_socket = ""
	I0723 14:49:16.941690   48243 command_runner.go:130] > # The certificate for the secure metrics server.
	I0723 14:49:16.941697   48243 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0723 14:49:16.941703   48243 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0723 14:49:16.941709   48243 command_runner.go:130] > # certificate on any modification event.
	I0723 14:49:16.941713   48243 command_runner.go:130] > # metrics_cert = ""
	I0723 14:49:16.941720   48243 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0723 14:49:16.941727   48243 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0723 14:49:16.941736   48243 command_runner.go:130] > # metrics_key = ""
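As an illustration of the metrics settings above, a minimal sketch that keeps metrics enabled on the default port but narrows collection to a few of the collectors listed; the chosen subset is arbitrary:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]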
	I0723 14:49:16.941743   48243 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0723 14:49:16.941752   48243 command_runner.go:130] > [crio.tracing]
	I0723 14:49:16.941766   48243 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0723 14:49:16.941772   48243 command_runner.go:130] > # enable_tracing = false
	I0723 14:49:16.941779   48243 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0723 14:49:16.941786   48243 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0723 14:49:16.941796   48243 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0723 14:49:16.941807   48243 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0723 14:49:16.941815   48243 command_runner.go:130] > # CRI-O NRI configuration.
	I0723 14:49:16.941823   48243 command_runner.go:130] > [crio.nri]
	I0723 14:49:16.941838   48243 command_runner.go:130] > # Globally enable or disable NRI.
	I0723 14:49:16.941847   48243 command_runner.go:130] > # enable_nri = false
	I0723 14:49:16.941855   48243 command_runner.go:130] > # NRI socket to listen on.
	I0723 14:49:16.941864   48243 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0723 14:49:16.941871   48243 command_runner.go:130] > # NRI plugin directory to use.
	I0723 14:49:16.941882   48243 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0723 14:49:16.941891   48243 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0723 14:49:16.941902   48243 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0723 14:49:16.941912   48243 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0723 14:49:16.941922   48243 command_runner.go:130] > # nri_disable_connections = false
	I0723 14:49:16.941933   48243 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0723 14:49:16.941944   48243 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0723 14:49:16.941952   48243 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0723 14:49:16.941960   48243 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0723 14:49:16.941972   48243 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0723 14:49:16.941982   48243 command_runner.go:130] > [crio.stats]
	I0723 14:49:16.942006   48243 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0723 14:49:16.942017   48243 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0723 14:49:16.942024   48243 command_runner.go:130] > # stats_collection_period = 0
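To round out the sections above, a hedged sketch enabling tracing, NRI and periodic stats collection; the endpoint and the 10-second period are assumed example values:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# 1000000 samples per million spans, i.e. always sample.
	tracing_sampling_rate_per_million = 1000000

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"

	[crio.stats]
	# Collect pod and container stats every 10 seconds instead of on-demand.
	stats_collection_period = 10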
	I0723 14:49:16.942066   48243 command_runner.go:130] ! time="2024-07-23 14:49:16.896508711Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0723 14:49:16.942089   48243 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0723 14:49:16.942249   48243 cni.go:84] Creating CNI manager for ""
	I0723 14:49:16.942263   48243 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0723 14:49:16.942279   48243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:49:16.942306   48243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-574866 NodeName:multinode-574866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:49:16.942493   48243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-574866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 14:49:16.942579   48243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:49:16.951900   48243 command_runner.go:130] > kubeadm
	I0723 14:49:16.951918   48243 command_runner.go:130] > kubectl
	I0723 14:49:16.951923   48243 command_runner.go:130] > kubelet
	I0723 14:49:16.951941   48243 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:49:16.951997   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 14:49:16.960592   48243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0723 14:49:16.976511   48243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:49:16.993957   48243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0723 14:49:17.008853   48243 ssh_runner.go:195] Run: grep 192.168.39.146	control-plane.minikube.internal$ /etc/hosts
	I0723 14:49:17.012333   48243 command_runner.go:130] > 192.168.39.146	control-plane.minikube.internal
	I0723 14:49:17.012412   48243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:49:17.153458   48243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:49:17.168208   48243 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866 for IP: 192.168.39.146
	I0723 14:49:17.168239   48243 certs.go:194] generating shared ca certs ...
	I0723 14:49:17.168261   48243 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:49:17.168458   48243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:49:17.168498   48243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:49:17.168509   48243 certs.go:256] generating profile certs ...
	I0723 14:49:17.168592   48243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/client.key
	I0723 14:49:17.168659   48243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key.21b56dd9
	I0723 14:49:17.168693   48243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key
	I0723 14:49:17.168704   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:49:17.168721   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:49:17.168733   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:49:17.168745   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:49:17.168754   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:49:17.168766   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:49:17.168778   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:49:17.168793   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:49:17.168845   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:49:17.168874   48243 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:49:17.168883   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:49:17.168910   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:49:17.168930   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:49:17.168952   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:49:17.168995   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:49:17.169027   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.169041   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.169054   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.169679   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:49:17.192158   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:49:17.214246   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:49:17.236439   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:49:17.259704   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 14:49:17.282268   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:49:17.304864   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:49:17.326241   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 14:49:17.347480   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:49:17.369796   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:49:17.393320   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:49:17.416250   48243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:49:17.431896   48243 ssh_runner.go:195] Run: openssl version
	I0723 14:49:17.437552   48243 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0723 14:49:17.437633   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:49:17.447948   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452659   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452732   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452787   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.458036   48243 command_runner.go:130] > 3ec20f2e
	I0723 14:49:17.458104   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:49:17.467129   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:49:17.477149   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481537   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481634   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481694   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.487085   48243 command_runner.go:130] > b5213941
	I0723 14:49:17.487163   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:49:17.519490   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:49:17.529979   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534063   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534091   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534136   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.539455   48243 command_runner.go:130] > 51391683
	I0723 14:49:17.539599   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:49:17.548547   48243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:49:17.553088   48243 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:49:17.553109   48243 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0723 14:49:17.553117   48243 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0723 14:49:17.553126   48243 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0723 14:49:17.553135   48243 command_runner.go:130] > Access: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553143   48243 command_runner.go:130] > Modify: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553152   48243 command_runner.go:130] > Change: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553163   48243 command_runner.go:130] >  Birth: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553210   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 14:49:17.558270   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.558447   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 14:49:17.563533   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.563588   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 14:49:17.568752   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.568812   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 14:49:17.573707   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.573847   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 14:49:17.578727   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.578993   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 14:49:17.583983   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.584053   48243 kubeadm.go:392] StartCluster: {Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:49:17.584153   48243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:49:17.584188   48243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:49:17.620436   48243 command_runner.go:130] > 4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b
	I0723 14:49:17.620471   48243 command_runner.go:130] > a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346
	I0723 14:49:17.620481   48243 command_runner.go:130] > 4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d
	I0723 14:49:17.620493   48243 command_runner.go:130] > ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272
	I0723 14:49:17.620502   48243 command_runner.go:130] > 3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a
	I0723 14:49:17.620511   48243 command_runner.go:130] > be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5
	I0723 14:49:17.620521   48243 command_runner.go:130] > 5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051
	I0723 14:49:17.620530   48243 command_runner.go:130] > 905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65
	I0723 14:49:17.620553   48243 cri.go:89] found id: "4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b"
	I0723 14:49:17.620566   48243 cri.go:89] found id: "a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346"
	I0723 14:49:17.620569   48243 cri.go:89] found id: "4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d"
	I0723 14:49:17.620573   48243 cri.go:89] found id: "ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272"
	I0723 14:49:17.620576   48243 cri.go:89] found id: "3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a"
	I0723 14:49:17.620579   48243 cri.go:89] found id: "be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5"
	I0723 14:49:17.620581   48243 cri.go:89] found id: "5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051"
	I0723 14:49:17.620583   48243 cri.go:89] found id: "905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65"
	I0723 14:49:17.620585   48243 cri.go:89] found id: ""
	I0723 14:49:17.620623   48243 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.287912317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746262287885239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3de3e90-5372-4d30-a9da-9ab1a72ea0fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.288568632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d4b57ff-3854-40e5-8c40-5474bd76a185 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.288643280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d4b57ff-3854-40e5-8c40-5474bd76a185 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.288983789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d4b57ff-3854-40e5-8c40-5474bd76a185 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.327248124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1319416-25bb-465d-8bd9-ad590c8f0f46 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.327332982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1319416-25bb-465d-8bd9-ad590c8f0f46 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.329171412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb7208ff-3d79-4fb7-bd81-3fdee9270163 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.329639794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746262329614486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb7208ff-3d79-4fb7-bd81-3fdee9270163 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.330179350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de6b8cdb-942a-45ea-b754-38dba053d466 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.330233494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de6b8cdb-942a-45ea-b754-38dba053d466 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.330634397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de6b8cdb-942a-45ea-b754-38dba053d466 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.368580003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b873817d-bebf-4581-be45-c3855c6b0c22 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.368670858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b873817d-bebf-4581-be45-c3855c6b0c22 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.369945664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ff1daa9-876f-4d99-9088-f3cf5690b0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.370361748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746262370340428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ff1daa9-876f-4d99-9088-f3cf5690b0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.370925667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51b3c6f7-1bf4-4bc6-8158-2af58df2643a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.370994528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51b3c6f7-1bf4-4bc6-8158-2af58df2643a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.371321208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51b3c6f7-1bf4-4bc6-8158-2af58df2643a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.409623501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56abef7f-a181-429c-9a0f-61b2074b6b67 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.409701105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56abef7f-a181-429c-9a0f-61b2074b6b67 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.411092177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96323d1e-7101-4872-a0ec-a8352fca509b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.411554817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746262411531925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96323d1e-7101-4872-a0ec-a8352fca509b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.412144145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27ab365c-c7dc-45cb-b952-f1f8ea0308a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.412233249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27ab365c-c7dc-45cb-b952-f1f8ea0308a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:51:02 multinode-574866 crio[2866]: time="2024-07-23 14:51:02.412618516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27ab365c-c7dc-45cb-b952-f1f8ea0308a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	14ff66a46fbb2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   dd908861199ee       busybox-fc5497c4f-q96vx
	ffb4c63fdfc46       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   2a5fa619122b1       kindnet-2j56b
	3688b0a09531f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   53af495fc761d       coredns-7db6d8ff4d-8k97t
	8c48fef80c116       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   7e702c094efa7       kube-proxy-6xzc9
	b38848a12257c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   6ff1a483be3d0       storage-provisioner
	f235b9cfb7b3e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   7de7d300aaef8       etcd-multinode-574866
	4767b5c9840d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   097958c3d04d0       kube-controller-manager-multinode-574866
	edd34678a4337       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   9f53eb965c5d8       kube-apiserver-multinode-574866
	b69c6488bcfdc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   d1e779749e473       kube-scheduler-multinode-574866
	7cd185490625c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   c2131ed8bfd32       busybox-fc5497c4f-q96vx
	4e595d9996574       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   ebeea502a99cc       coredns-7db6d8ff4d-8k97t
	a87ecdc695287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c4903b55b1a75       storage-provisioner
	4442a162f2430       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   c6cea513ac543       kindnet-2j56b
	ebf4f61fb738d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   8bdea4fd24e09       kube-proxy-6xzc9
	3140b73105eba       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   e9816133c40d8       kube-scheduler-multinode-574866
	be7075af99a3f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   776b440632be7       kube-apiserver-multinode-574866
	5f7c7a4d6150a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   367e4b5ce253e       etcd-multinode-574866
	905cbfc74b196       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   30c5a92fbce2b       kube-controller-manager-multinode-574866
	
	
	==> coredns [3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47243 - 24721 "HINFO IN 8108853635571185609.4362304376288203997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332231s
	
	
	==> coredns [4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b] <==
	[INFO] 10.244.0.3:53170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002160976s
	[INFO] 10.244.0.3:59966 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090345s
	[INFO] 10.244.0.3:36811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000513289s
	[INFO] 10.244.0.3:56612 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319214s
	[INFO] 10.244.0.3:54315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067994s
	[INFO] 10.244.0.3:56766 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006233s
	[INFO] 10.244.0.3:43826 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064496s
	[INFO] 10.244.1.2:55191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098956s
	[INFO] 10.244.1.2:50947 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067791s
	[INFO] 10.244.1.2:38966 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064323s
	[INFO] 10.244.1.2:40157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063439s
	[INFO] 10.244.0.3:48325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120507s
	[INFO] 10.244.0.3:55380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074611s
	[INFO] 10.244.0.3:51387 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065366s
	[INFO] 10.244.0.3:44042 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096288s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105366s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000333846s
	[INFO] 10.244.1.2:33961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000269519s
	[INFO] 10.244.1.2:41107 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134656s
	[INFO] 10.244.0.3:51347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174612s
	[INFO] 10.244.0.3:37425 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045024s
	[INFO] 10.244.0.3:58196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000040542s
	[INFO] 10.244.0.3:43409 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059692s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-574866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-574866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=multinode-574866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_42_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:42:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-574866
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:50:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:43:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-574866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df01da1e1441481fba781beb810260b5
	  System UUID:                df01da1e-1441-481f-ba78-1beb810260b5
	  Boot ID:                    02842110-16cf-4fac-a5da-39b8dc15ce57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q96vx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 coredns-7db6d8ff4d-8k97t                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-multinode-574866                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m24s
	  kube-system                 kindnet-2j56b                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m10s
	  kube-system                 kube-apiserver-multinode-574866             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-multinode-574866    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-6xzc9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-multinode-574866             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m9s                 kube-proxy       
	  Normal  Starting                 97s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m23s                kubelet          Node multinode-574866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m23s                kubelet          Node multinode-574866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                kubelet          Node multinode-574866 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m23s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m10s                node-controller  Node multinode-574866 event: Registered Node multinode-574866 in Controller
	  Normal  NodeReady                7m55s                kubelet          Node multinode-574866 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-574866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-574866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-574866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node multinode-574866 event: Registered Node multinode-574866 in Controller
	
	
	Name:               multinode-574866-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-574866-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=multinode-574866
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_50_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:50:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-574866-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:50:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:50:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:50:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:50:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:50:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    multinode-574866-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2d499d9beff42548473cad134041789
	  System UUID:                d2d499d9-beff-4254-8473-cad134041789
	  Boot ID:                    439afd0a-38f6-4b9e-b0fe-5419c938af12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ztnd7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-xndsk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m26s
	  kube-system                 kube-proxy-jms7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m21s                  kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m26s (x2 over 7m26s)  kubelet          Node multinode-574866-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m26s (x2 over 7m26s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s (x2 over 7m26s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m26s                  kubelet          Starting kubelet.
	  Normal  NodeReady                7m6s                   kubelet          Node multinode-574866-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet          Node multinode-574866-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                    node-controller  Node multinode-574866-m02 event: Registered Node multinode-574866-m02 in Controller
	  Normal  NodeReady                41s                    kubelet          Node multinode-574866-m02 status is now: NodeReady
	
	
	Name:               multinode-574866-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-574866-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=multinode-574866
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_50_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-574866-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:50:59 +0000   Tue, 23 Jul 2024 14:50:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:50:59 +0000   Tue, 23 Jul 2024 14:50:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:50:59 +0000   Tue, 23 Jul 2024 14:50:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:50:59 +0000   Tue, 23 Jul 2024 14:50:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    multinode-574866-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d9e805ec33647f188da11d50446c137
	  System UUID:                7d9e805e-c336-47f1-88da-11d50446c137
	  Boot ID:                    2f94172f-5cc8-4e37-8960-d09e9a7c557b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7rxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-proxy-48s58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m28s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m38s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m32s (x2 over 6m32s)  kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x2 over 6m32s)  kubelet     Node multinode-574866-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s (x2 over 6m32s)  kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m13s                  kubelet     Node multinode-574866-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m43s (x2 over 5m43s)  kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m43s)  kubelet     Node multinode-574866-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m43s (x2 over 5m43s)  kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet     Node multinode-574866-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-574866-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-574866-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-574866-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059608] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.178904] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120684] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.244065] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +3.859098] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.995628] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.057810] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.975960] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.085325] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.177202] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.102282] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.026107] kauditd_printk_skb: 56 callbacks suppressed
	[Jul23 14:43] kauditd_printk_skb: 12 callbacks suppressed
	[Jul23 14:49] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.141771] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.164756] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.142324] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.266428] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +0.984005] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +2.075961] systemd-fstab-generator[3073]: Ignoring "noauto" option for root device
	[  +5.711528] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.043827] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.830922] systemd-fstab-generator[3907]: Ignoring "noauto" option for root device
	[ +20.879674] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051] <==
	{"level":"info","ts":"2024-07-23T14:42:34.616251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	{"level":"info","ts":"2024-07-23T14:42:34.616367Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.620495Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.62054Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.622078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:43:36.909794Z","caller":"traceutil/trace.go:171","msg":"trace[1160322536] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"226.957657ms","start":"2024-07-23T14:43:36.6828Z","end":"2024-07-23T14:43:36.909758Z","steps":["trace[1160322536] 'process raft request'  (duration: 150.914185ms)","trace[1160322536] 'compare'  (duration: 75.897367ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:43:36.91173Z","caller":"traceutil/trace.go:171","msg":"trace[110772945] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"175.145523ms","start":"2024-07-23T14:43:36.736569Z","end":"2024-07-23T14:43:36.911714Z","steps":["trace[110772945] 'process raft request'  (duration: 174.95893ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:43:40.77904Z","caller":"traceutil/trace.go:171","msg":"trace[2130986330] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"110.789592ms","start":"2024-07-23T14:43:40.668235Z","end":"2024-07-23T14:43:40.779024Z","steps":["trace[2130986330] 'process raft request'  (duration: 110.649729ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:43:45.036357Z","caller":"traceutil/trace.go:171","msg":"trace[208837338] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"239.591556ms","start":"2024-07-23T14:43:44.796741Z","end":"2024-07-23T14:43:45.036332Z","steps":["trace[208837338] 'process raft request'  (duration: 239.137032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:44:30.332905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.430893ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751779267824159558 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-574866-m03.17e4debf2368c8c6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-574866-m03.17e4debf2368c8c6\" value_size:642 lease:8751779267824159188 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-23T14:44:30.333105Z","caller":"traceutil/trace.go:171","msg":"trace[2354266] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"134.533302ms","start":"2024-07-23T14:44:30.198542Z","end":"2024-07-23T14:44:30.333075Z","steps":["trace[2354266] 'read index received'  (duration: 133.879423ms)","trace[2354266] 'applied index is now lower than readState.Index'  (duration: 653.091µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T14:44:30.333192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.645181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-574866-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-23T14:44:30.333224Z","caller":"traceutil/trace.go:171","msg":"trace[2001289915] range","detail":"{range_begin:/registry/minions/multinode-574866-m03; range_end:; response_count:1; response_revision:573; }","duration":"134.700349ms","start":"2024-07-23T14:44:30.198515Z","end":"2024-07-23T14:44:30.333215Z","steps":["trace[2001289915] 'agreement among raft nodes before linearized reading'  (duration: 134.630139ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:44:30.333315Z","caller":"traceutil/trace.go:171","msg":"trace[841044821] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"238.960366ms","start":"2024-07-23T14:44:30.09434Z","end":"2024-07-23T14:44:30.3333Z","steps":["trace[841044821] 'process raft request'  (duration: 75.313081ms)","trace[841044821] 'compare'  (duration: 162.243825ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:44:30.33335Z","caller":"traceutil/trace.go:171","msg":"trace[575406015] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"170.172067ms","start":"2024-07-23T14:44:30.163172Z","end":"2024-07-23T14:44:30.333344Z","steps":["trace[575406015] 'process raft request'  (duration: 169.855728ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:47:44.194067Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T14:47:44.194192Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-574866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	{"level":"warn","ts":"2024-07-23T14:47:44.194353Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.194501Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.24689Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.247125Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:47:44.247533Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc85001aa37e7974","current-leader-member-id":"fc85001aa37e7974"}
	{"level":"info","ts":"2024-07-23T14:47:44.250547Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:47:44.250753Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:47:44.250801Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-574866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> etcd [f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d] <==
	{"level":"info","ts":"2024-07-23T14:49:20.484021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:49:20.484032Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:49:20.490061Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T14:49:20.492768Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fc85001aa37e7974","initial-advertise-peer-urls":["https://192.168.39.146:2380"],"listen-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T14:49:20.492869Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:49:20.493029Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:49:20.495507Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:49:20.484424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 switched to configuration voters=(18195949983872481652)"}
	{"level":"info","ts":"2024-07-23T14:49:20.502596Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","added-peer-id":"fc85001aa37e7974","added-peer-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-07-23T14:49:20.502752Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:49:20.502795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:49:22.33414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgPreVoteResp from fc85001aa37e7974 at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgVoteResp from fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc85001aa37e7974 elected leader fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.339523Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc85001aa37e7974","local-member-attributes":"{Name:multinode-574866 ClientURLs:[https://192.168.39.146:2379]}","request-path":"/0/members/fc85001aa37e7974/attributes","cluster-id":"25c4f0770a3181de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:49:22.33964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:49:22.33967Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:49:22.339805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:49:22.340414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T14:49:22.342337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:49:22.342409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	
	
	==> kernel <==
	 14:51:02 up 8 min,  0 users,  load average: 0.14, 0.21, 0.12
	Linux multinode-574866 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d] <==
	I0723 14:46:57.685380       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:07.684936       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:07.685052       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:07.685236       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:07.685261       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:07.685325       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:07.685344       1 main.go:299] handling current node
	I0723 14:47:17.692546       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:17.692590       1 main.go:299] handling current node
	I0723 14:47:17.692607       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:17.692613       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:17.692751       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:17.692766       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:27.689956       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:27.690233       1 main.go:299] handling current node
	I0723 14:47:27.690302       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:27.690330       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:27.690665       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:27.690708       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:37.690974       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:37.691053       1 main.go:299] handling current node
	I0723 14:47:37.691083       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:37.691095       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:37.691229       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:37.691248       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44] <==
	I0723 14:50:16.082681       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:50:26.081925       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:50:26.082057       1 main.go:299] handling current node
	I0723 14:50:26.082090       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:50:26.082109       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:50:26.082272       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:50:26.082299       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:50:36.082956       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:50:36.083072       1 main.go:299] handling current node
	I0723 14:50:36.083152       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:50:36.083197       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:50:36.083377       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:50:36.083399       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:50:46.085550       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:50:46.085613       1 main.go:299] handling current node
	I0723 14:50:46.085632       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:50:46.085641       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:50:46.085813       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:50:46.085846       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.2.0/24] 
	I0723 14:50:56.085579       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:50:56.085771       1 main.go:299] handling current node
	I0723 14:50:56.085818       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:50:56.085839       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:50:56.085999       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:50:56.086025       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5] <==
	W0723 14:47:44.222914       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.222953       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.222989       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223023       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223057       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223099       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223132       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223260       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223300       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223406       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223531       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223592       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223629       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223671       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223823       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223862       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223899       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223934       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223978       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224022       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224080       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224118       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224159       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224196       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224247       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448] <==
	I0723 14:49:23.611938       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 14:49:23.612583       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 14:49:23.612636       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 14:49:23.612643       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 14:49:23.622698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:49:23.623274       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0723 14:49:23.629590       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 14:49:23.630278       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0723 14:49:23.656005       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 14:49:23.656035       1 aggregator.go:165] initial CRD sync complete...
	I0723 14:49:23.656058       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 14:49:23.656063       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 14:49:23.656068       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:49:23.678617       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 14:49:23.691099       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:49:23.691137       1 policy_source.go:224] refreshing policies
	I0723 14:49:23.695890       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:49:24.524998       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0723 14:49:25.663156       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:49:25.845310       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:49:25.867992       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:49:25.964021       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 14:49:25.982783       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 14:49:36.638673       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:49:36.844653       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683] <==
	I0723 14:49:37.116485       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:49:37.136632       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:49:37.136667       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 14:49:57.938606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.787717ms"
	I0723 14:49:57.938702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.693µs"
	I0723 14:49:57.948086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.14953ms"
	I0723 14:49:57.948263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.923µs"
	I0723 14:50:02.204822       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m02\" does not exist"
	I0723 14:50:02.217691       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m02" podCIDRs=["10.244.1.0/24"]
	I0723 14:50:03.137025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.711µs"
	I0723 14:50:03.166292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.497µs"
	I0723 14:50:03.178647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="214.146µs"
	I0723 14:50:03.182139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.01µs"
	I0723 14:50:03.184099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.68µs"
	I0723 14:50:07.197044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.867µs"
	I0723 14:50:21.011815       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:21.033404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.52µs"
	I0723 14:50:21.048964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.366µs"
	I0723 14:50:24.435736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.21663ms"
	I0723 14:50:24.436001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.082µs"
	I0723 14:50:39.268756       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:40.282669       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:50:40.283290       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:40.301193       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.2.0/24"]
	I0723 14:50:59.611541       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	
	
	==> kube-controller-manager [905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65] <==
	I0723 14:43:36.915234       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m02\" does not exist"
	I0723 14:43:36.928836       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m02" podCIDRs=["10.244.1.0/24"]
	I0723 14:43:37.131369       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-574866-m02"
	I0723 14:43:56.811011       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:43:59.036281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.725752ms"
	I0723 14:43:59.049912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.57051ms"
	I0723 14:43:59.049987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.188µs"
	I0723 14:43:59.055785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.973µs"
	I0723 14:44:02.628974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.553811ms"
	I0723 14:44:02.629153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.834µs"
	I0723 14:44:02.778908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.834605ms"
	I0723 14:44:02.779138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.046µs"
	I0723 14:44:30.335009       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:44:30.334984       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:44:30.371654       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.2.0/24"]
	I0723 14:44:32.153894       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-574866-m03"
	I0723 14:44:49.677987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:18.019631       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:19.523926       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:45:19.526567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:19.537078       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.3.0/24"]
	I0723 14:45:38.908496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:46:22.206357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:46:22.275499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.900831ms"
	I0723 14:46:22.276514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.485µs"
	
	
	==> kube-proxy [8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b] <==
	I0723 14:49:25.357503       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:49:25.376767       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0723 14:49:25.498557       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:49:25.498600       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:49:25.498617       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:49:25.507583       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:49:25.507813       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:49:25.507826       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:49:25.514226       1 config.go:192] "Starting service config controller"
	I0723 14:49:25.514251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:49:25.514274       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:49:25.514278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:49:25.519201       1 config.go:319] "Starting node config controller"
	I0723 14:49:25.519225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:49:25.615603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:49:25.615737       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:49:25.623520       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272] <==
	I0723 14:42:53.350793       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:42:53.369193       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0723 14:42:53.458100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:42:53.458161       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:42:53.458178       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:42:53.460774       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:42:53.460986       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:42:53.461015       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:42:53.462946       1 config.go:192] "Starting service config controller"
	I0723 14:42:53.463241       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:42:53.463294       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:42:53.463300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:42:53.464072       1 config.go:319] "Starting node config controller"
	I0723 14:42:53.464095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:42:53.563797       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:42:53.563853       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:42:53.564133       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a] <==
	E0723 14:42:36.518546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:42:36.518528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:42:36.518600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:42:37.361698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:42:37.361768       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:42:37.447210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.447514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.457302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.457342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.462152       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:42:37.462268       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:42:37.471051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.471167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.472312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.472343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.609610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:42:37.609753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:42:37.685212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:42:37.685263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:42:37.726671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:42:37.726746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:42:37.739998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 14:42:37.740098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0723 14:42:40.612016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0723 14:47:44.196360       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634] <==
	I0723 14:49:21.166587       1 serving.go:380] Generated self-signed cert in-memory
	W0723 14:49:23.605028       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 14:49:23.605105       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:49:23.605115       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 14:49:23.605121       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 14:49:23.624825       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:49:23.624972       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:49:23.633722       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:49:23.633773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:49:23.634124       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:49:23.633794       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:49:23.734939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:49:20 multinode-574866 kubelet[3080]: E0723 14:49:20.404747    3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	Jul 23 14:49:20 multinode-574866 kubelet[3080]: I0723 14:49:20.899427    3080 kubelet_node_status.go:73] "Attempting to register node" node="multinode-574866"
	Jul 23 14:49:23 multinode-574866 kubelet[3080]: I0723 14:49:23.762986    3080 kubelet_node_status.go:112] "Node was previously registered" node="multinode-574866"
	Jul 23 14:49:23 multinode-574866 kubelet[3080]: I0723 14:49:23.763179    3080 kubelet_node_status.go:76] "Successfully registered node" node="multinode-574866"
	Jul 23 14:49:23 multinode-574866 kubelet[3080]: I0723 14:49:23.764690    3080 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 23 14:49:23 multinode-574866 kubelet[3080]: I0723 14:49:23.765774    3080 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.353644    3080 apiserver.go:52] "Watching apiserver"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.356671    3080 topology_manager.go:215] "Topology Admit Handler" podUID="196eb952-ce8f-4cb8-aadf-c62bdfb1375e" podNamespace="kube-system" podName="kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.356876    3080 topology_manager.go:215] "Topology Admit Handler" podUID="7ea62019-9fa6-4ea4-a7ce-1d6990cdc646" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8k97t"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.356981    3080 topology_manager.go:215] "Topology Admit Handler" podUID="fff83ebe-fe7c-4699-94af-849be3c3f3ee" podNamespace="kube-system" podName="kube-proxy-6xzc9"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.357101    3080 topology_manager.go:215] "Topology Admit Handler" podUID="3e769cd6-3fa7-4db4-843c-55ad566c6caf" podNamespace="kube-system" podName="storage-provisioner"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.357173    3080 topology_manager.go:215] "Topology Admit Handler" podUID="ac55b5a2-2f09-4441-8dc7-a80407abaa0a" podNamespace="default" podName="busybox-fc5497c4f-q96vx"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.376253    3080 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.387862    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/196eb952-ce8f-4cb8-aadf-c62bdfb1375e-cni-cfg\") pod \"kindnet-2j56b\" (UID: \"196eb952-ce8f-4cb8-aadf-c62bdfb1375e\") " pod="kube-system/kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.388020    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fff83ebe-fe7c-4699-94af-849be3c3f3ee-lib-modules\") pod \"kube-proxy-6xzc9\" (UID: \"fff83ebe-fe7c-4699-94af-849be3c3f3ee\") " pod="kube-system/kube-proxy-6xzc9"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.389418    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e769cd6-3fa7-4db4-843c-55ad566c6caf-tmp\") pod \"storage-provisioner\" (UID: \"3e769cd6-3fa7-4db4-843c-55ad566c6caf\") " pod="kube-system/storage-provisioner"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.389622    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196eb952-ce8f-4cb8-aadf-c62bdfb1375e-xtables-lock\") pod \"kindnet-2j56b\" (UID: \"196eb952-ce8f-4cb8-aadf-c62bdfb1375e\") " pod="kube-system/kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.390079    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196eb952-ce8f-4cb8-aadf-c62bdfb1375e-lib-modules\") pod \"kindnet-2j56b\" (UID: \"196eb952-ce8f-4cb8-aadf-c62bdfb1375e\") " pod="kube-system/kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.390238    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fff83ebe-fe7c-4699-94af-849be3c3f3ee-xtables-lock\") pod \"kube-proxy-6xzc9\" (UID: \"fff83ebe-fe7c-4699-94af-849be3c3f3ee\") " pod="kube-system/kube-proxy-6xzc9"
	Jul 23 14:49:27 multinode-574866 kubelet[3080]: I0723 14:49:27.377758    3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 23 14:50:19 multinode-574866 kubelet[3080]: E0723 14:50:19.431959    3080 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 14:51:02.023287   49324 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
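The "bufio.Scanner: token too long" message in the stderr block above is Go's scanner hitting its default per-line limit (bufio.MaxScanTokenSize, 64 KiB) while logs.go reads lastStart.txt, which evidently contains a longer line. Below is a minimal sketch of reading such a file with an enlarged scanner buffer; the file name and the 1 MiB cap are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path standing in for minikube's lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raise the cap to 1 MiB
		// so very long log lines no longer fail with "token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err) // would still fire if a line exceeded the 1 MiB cap
		}
	}
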
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-574866 -n multinode-574866
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-574866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.30s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 stop
E0723 14:52:11.818571   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-574866 stop: exit status 82 (2m0.469952142s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-574866-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-574866 stop": exit status 82
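The exit status 82 reported here accompanies the GUEST_STOP_TIMEOUT reason printed in the stderr block above: the stop command gave up while the m02 VM stayed in the "Running" state. A minimal sketch of how a harness can surface such a non-zero exit code with os/exec follows; the binary path and arguments are copied from the log, and the re-run itself is purely illustrative:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Illustrative re-run of the failing command from the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-574866", "stop")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// For the run captured above this would print 82.
			fmt.Printf("stop exited with code %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to start command:", err)
			return
		}
		fmt.Printf("stop succeeded:\n%s", out)
	}
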
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-574866 status: exit status 3 (18.654868814s)

                                                
                                                
-- stdout --
	multinode-574866
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-574866-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 14:53:25.306683   49986 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0723 14:53:25.306716   49986 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host

                                                
                                                
** /stderr **
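The status errors above show minikube trying to open an SSH session to the m02 node at 192.168.39.39:22 and getting "no route to host", meaning the VM is unreachable at the network level rather than up but refusing SSH ("connection refused"). A minimal sketch of that reachability check with net.DialTimeout; the address comes from the log and the 5-second timeout is an assumption:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the error above; the timeout is an illustrative choice.
		conn, err := net.DialTimeout("tcp", "192.168.39.39:22", 5*time.Second)
		if err != nil {
			// "connect: no route to host" lands here when the VM's IP has vanished,
			// as opposed to "connection refused" when the host is up but sshd is not.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}
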
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-574866 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-574866 -n multinode-574866
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-574866 logs -n 25: (1.387921681s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866:/home/docker/cp-test_multinode-574866-m02_multinode-574866.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866 sudo cat                                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m02_multinode-574866.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03:/home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866-m03 sudo cat                                   | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp testdata/cp-test.txt                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866:/home/docker/cp-test_multinode-574866-m03_multinode-574866.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866 sudo cat                                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m03_multinode-574866.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt                       | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m02:/home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n                                                                 | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | multinode-574866-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-574866 ssh -n multinode-574866-m02 sudo cat                                   | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-574866 node stop m03                                                          | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:44 UTC | 23 Jul 24 14:45 UTC |
	| node    | multinode-574866 node start                                                             | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC | 23 Jul 24 14:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-574866                                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC |                     |
	| stop    | -p multinode-574866                                                                     | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:45 UTC |                     |
	| start   | -p multinode-574866                                                                     | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:47 UTC | 23 Jul 24 14:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-574866                                                                | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:51 UTC |                     |
	| node    | multinode-574866 node delete                                                            | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:51 UTC | 23 Jul 24 14:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-574866 stop                                                                   | multinode-574866 | jenkins | v1.33.1 | 23 Jul 24 14:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:47:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:47:43.368643   48243 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:47:43.368911   48243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:47:43.368920   48243 out.go:304] Setting ErrFile to fd 2...
	I0723 14:47:43.368926   48243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:47:43.369109   48243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:47:43.369675   48243 out.go:298] Setting JSON to false
	I0723 14:47:43.370616   48243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5409,"bootTime":1721740654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:47:43.370672   48243 start.go:139] virtualization: kvm guest
	I0723 14:47:43.372846   48243 out.go:177] * [multinode-574866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:47:43.374239   48243 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:47:43.374280   48243 notify.go:220] Checking for updates...
	I0723 14:47:43.376983   48243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:47:43.378304   48243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:47:43.379517   48243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:47:43.380900   48243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:47:43.382108   48243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:47:43.383738   48243 config.go:182] Loaded profile config "multinode-574866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:47:43.383878   48243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:47:43.384347   48243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:47:43.384402   48243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:47:43.400623   48243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0723 14:47:43.400994   48243 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:47:43.401462   48243 main.go:141] libmachine: Using API Version  1
	I0723 14:47:43.401480   48243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:47:43.401921   48243 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:47:43.402122   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.437618   48243 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 14:47:43.439038   48243 start.go:297] selected driver: kvm2
	I0723 14:47:43.439055   48243 start.go:901] validating driver "kvm2" against &{Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:47:43.439278   48243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:47:43.439701   48243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:47:43.439785   48243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 14:47:43.454892   48243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 14:47:43.455710   48243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:47:43.455741   48243 cni.go:84] Creating CNI manager for ""
	I0723 14:47:43.455747   48243 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0723 14:47:43.455821   48243 start.go:340] cluster config:
	{Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:47:43.455968   48243 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:47:43.457886   48243 out.go:177] * Starting "multinode-574866" primary control-plane node in "multinode-574866" cluster
	I0723 14:47:43.459144   48243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:47:43.459174   48243 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 14:47:43.459181   48243 cache.go:56] Caching tarball of preloaded images
	I0723 14:47:43.459252   48243 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 14:47:43.459263   48243 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:47:43.459380   48243 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/config.json ...
	I0723 14:47:43.459564   48243 start.go:360] acquireMachinesLock for multinode-574866: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 14:47:43.459600   48243 start.go:364] duration metric: took 20.98µs to acquireMachinesLock for "multinode-574866"
	I0723 14:47:43.459613   48243 start.go:96] Skipping create...Using existing machine configuration
	I0723 14:47:43.459623   48243 fix.go:54] fixHost starting: 
	I0723 14:47:43.459866   48243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:47:43.459894   48243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:47:43.474100   48243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0723 14:47:43.474540   48243 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:47:43.475074   48243 main.go:141] libmachine: Using API Version  1
	I0723 14:47:43.475101   48243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:47:43.475455   48243 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:47:43.475637   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.475822   48243 main.go:141] libmachine: (multinode-574866) Calling .GetState
	I0723 14:47:43.477578   48243 fix.go:112] recreateIfNeeded on multinode-574866: state=Running err=<nil>
	W0723 14:47:43.477610   48243 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 14:47:43.480746   48243 out.go:177] * Updating the running kvm2 "multinode-574866" VM ...
	I0723 14:47:43.482210   48243 machine.go:94] provisionDockerMachine start ...
	I0723 14:47:43.482231   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:47:43.482486   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.485066   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.485590   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.485617   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.485737   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.485896   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.486070   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.486233   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.486406   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.486614   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.486627   48243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:47:43.599287   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-574866
	
	I0723 14:47:43.599321   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.599544   48243 buildroot.go:166] provisioning hostname "multinode-574866"
	I0723 14:47:43.599566   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.599763   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.602642   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.602956   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.602973   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.603151   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.603322   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.603456   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.603567   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.603736   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.603930   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.603944   48243 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-574866 && echo "multinode-574866" | sudo tee /etc/hostname
	I0723 14:47:43.725083   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-574866
	
	I0723 14:47:43.725118   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.728059   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.728452   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.728486   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.728610   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.728789   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.728954   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.729085   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.729235   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:43.729401   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:43.729416   48243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-574866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-574866/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-574866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:47:43.839175   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:47:43.839204   48243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 14:47:43.839230   48243 buildroot.go:174] setting up certificates
	I0723 14:47:43.839241   48243 provision.go:84] configureAuth start
	I0723 14:47:43.839253   48243 main.go:141] libmachine: (multinode-574866) Calling .GetMachineName
	I0723 14:47:43.839555   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:47:43.842074   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.842441   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.842468   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.842643   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.844897   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.845312   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.845339   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.845400   48243 provision.go:143] copyHostCerts
	I0723 14:47:43.845433   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:47:43.845465   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 14:47:43.845475   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 14:47:43.845540   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 14:47:43.845635   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:47:43.845660   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 14:47:43.845667   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 14:47:43.845691   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 14:47:43.845745   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:47:43.845760   48243 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 14:47:43.845769   48243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 14:47:43.845795   48243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 14:47:43.845851   48243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.multinode-574866 san=[127.0.0.1 192.168.39.146 localhost minikube multinode-574866]
	I0723 14:47:43.900898   48243 provision.go:177] copyRemoteCerts
	I0723 14:47:43.900963   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:47:43.900987   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:43.903874   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.904218   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:43.904249   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:43.904446   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:43.904629   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:43.904785   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:43.904882   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:47:43.990360   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0723 14:47:43.990440   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0723 14:47:44.016378   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0723 14:47:44.016466   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:47:44.041436   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0723 14:47:44.041536   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 14:47:44.069079   48243 provision.go:87] duration metric: took 229.826098ms to configureAuth
	I0723 14:47:44.069105   48243 buildroot.go:189] setting minikube options for container-runtime
	I0723 14:47:44.069308   48243 config.go:182] Loaded profile config "multinode-574866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:47:44.069389   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:47:44.072288   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:44.072673   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:47:44.072702   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:47:44.072888   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:47:44.073101   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:44.073272   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:47:44.073492   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:47:44.073684   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:47:44.073846   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:47:44.073860   48243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:49:14.729858   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:49:14.729885   48243 machine.go:97] duration metric: took 1m31.247661262s to provisionDockerMachine
	I0723 14:49:14.729900   48243 start.go:293] postStartSetup for "multinode-574866" (driver="kvm2")
	I0723 14:49:14.729914   48243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:49:14.729934   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.730305   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:49:14.730362   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.733509   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.733934   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.733954   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.734121   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.734297   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.734511   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.734678   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:14.822034   48243 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:49:14.826358   48243 command_runner.go:130] > NAME=Buildroot
	I0723 14:49:14.826397   48243 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0723 14:49:14.826404   48243 command_runner.go:130] > ID=buildroot
	I0723 14:49:14.826411   48243 command_runner.go:130] > VERSION_ID=2023.02.9
	I0723 14:49:14.826418   48243 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0723 14:49:14.826471   48243 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 14:49:14.826502   48243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 14:49:14.826576   48243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 14:49:14.826817   48243 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 14:49:14.826836   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /etc/ssl/certs/185032.pem
	I0723 14:49:14.826947   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 14:49:14.836119   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:49:14.858330   48243 start.go:296] duration metric: took 128.416201ms for postStartSetup
	I0723 14:49:14.858394   48243 fix.go:56] duration metric: took 1m31.398769382s for fixHost
	I0723 14:49:14.858423   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.860947   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.861259   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.861287   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.861400   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.861692   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.861846   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.861959   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.862127   48243 main.go:141] libmachine: Using SSH client type: native
	I0723 14:49:14.862286   48243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0723 14:49:14.862310   48243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 14:49:14.970781   48243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721746154.942689956
	
	I0723 14:49:14.970805   48243 fix.go:216] guest clock: 1721746154.942689956
	I0723 14:49:14.970815   48243 fix.go:229] Guest: 2024-07-23 14:49:14.942689956 +0000 UTC Remote: 2024-07-23 14:49:14.858400853 +0000 UTC m=+91.523531233 (delta=84.289103ms)
	I0723 14:49:14.970847   48243 fix.go:200] guest clock delta is within tolerance: 84.289103ms
	I0723 14:49:14.970854   48243 start.go:83] releasing machines lock for "multinode-574866", held for 1m31.511247967s
	I0723 14:49:14.970876   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.971158   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:49:14.973903   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.974291   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.974307   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.974551   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.974947   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.975167   48243 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:49:14.975301   48243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:49:14.975356   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.975405   48243 ssh_runner.go:195] Run: cat /version.json
	I0723 14:49:14.975427   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:49:14.977898   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.977931   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978225   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.978266   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978295   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:14.978309   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:14.978397   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.978569   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.978638   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:49:14.978725   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.978766   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:49:14.978990   48243 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:49:14.978999   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:14.979140   48243 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:49:15.055241   48243 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0723 14:49:15.055439   48243 ssh_runner.go:195] Run: systemctl --version
	I0723 14:49:15.088002   48243 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0723 14:49:15.088713   48243 command_runner.go:130] > systemd 252 (252)
	I0723 14:49:15.088748   48243 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0723 14:49:15.088816   48243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:49:15.246660   48243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 14:49:15.253505   48243 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0723 14:49:15.253656   48243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 14:49:15.253717   48243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:49:15.262251   48243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 14:49:15.262270   48243 start.go:495] detecting cgroup driver to use...
	I0723 14:49:15.262331   48243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:49:15.277707   48243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:49:15.291470   48243 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:49:15.291529   48243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:49:15.304655   48243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:49:15.317702   48243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:49:15.459715   48243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:49:15.596113   48243 docker.go:233] disabling docker service ...
	I0723 14:49:15.596199   48243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:49:15.612087   48243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:49:15.625351   48243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:49:15.762471   48243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:49:15.897395   48243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:49:15.910475   48243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:49:15.928012   48243 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0723 14:49:15.928508   48243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:49:15.928569   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.938850   48243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:49:15.938924   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.949438   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.959597   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.970042   48243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:49:15.980040   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:15.989668   48243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:16.000156   48243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:49:16.010036   48243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:49:16.018492   48243 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0723 14:49:16.018858   48243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:49:16.027808   48243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:49:16.166504   48243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 14:49:16.706143   48243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:49:16.706219   48243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:49:16.710639   48243 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0723 14:49:16.710658   48243 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0723 14:49:16.710676   48243 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I0723 14:49:16.710682   48243 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0723 14:49:16.710687   48243 command_runner.go:130] > Access: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710697   48243 command_runner.go:130] > Modify: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710705   48243 command_runner.go:130] > Change: 2024-07-23 14:49:16.574015398 +0000
	I0723 14:49:16.710710   48243 command_runner.go:130] >  Birth: -
	I0723 14:49:16.710748   48243 start.go:563] Will wait 60s for crictl version
	I0723 14:49:16.710785   48243 ssh_runner.go:195] Run: which crictl
	I0723 14:49:16.714000   48243 command_runner.go:130] > /usr/bin/crictl
	I0723 14:49:16.714046   48243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:49:16.749009   48243 command_runner.go:130] > Version:  0.1.0
	I0723 14:49:16.749033   48243 command_runner.go:130] > RuntimeName:  cri-o
	I0723 14:49:16.749040   48243 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0723 14:49:16.749048   48243 command_runner.go:130] > RuntimeApiVersion:  v1
	I0723 14:49:16.749075   48243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 14:49:16.749156   48243 ssh_runner.go:195] Run: crio --version
	I0723 14:49:16.775577   48243 command_runner.go:130] > crio version 1.29.1
	I0723 14:49:16.775600   48243 command_runner.go:130] > Version:        1.29.1
	I0723 14:49:16.775605   48243 command_runner.go:130] > GitCommit:      unknown
	I0723 14:49:16.775609   48243 command_runner.go:130] > GitCommitDate:  unknown
	I0723 14:49:16.775613   48243 command_runner.go:130] > GitTreeState:   clean
	I0723 14:49:16.775619   48243 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0723 14:49:16.775622   48243 command_runner.go:130] > GoVersion:      go1.21.6
	I0723 14:49:16.775627   48243 command_runner.go:130] > Compiler:       gc
	I0723 14:49:16.775631   48243 command_runner.go:130] > Platform:       linux/amd64
	I0723 14:49:16.775634   48243 command_runner.go:130] > Linkmode:       dynamic
	I0723 14:49:16.775638   48243 command_runner.go:130] > BuildTags:      
	I0723 14:49:16.775642   48243 command_runner.go:130] >   containers_image_ostree_stub
	I0723 14:49:16.775645   48243 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0723 14:49:16.775649   48243 command_runner.go:130] >   btrfs_noversion
	I0723 14:49:16.775655   48243 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0723 14:49:16.775661   48243 command_runner.go:130] >   libdm_no_deferred_remove
	I0723 14:49:16.775666   48243 command_runner.go:130] >   seccomp
	I0723 14:49:16.775672   48243 command_runner.go:130] > LDFlags:          unknown
	I0723 14:49:16.775678   48243 command_runner.go:130] > SeccompEnabled:   true
	I0723 14:49:16.775705   48243 command_runner.go:130] > AppArmorEnabled:  false
	I0723 14:49:16.776770   48243 ssh_runner.go:195] Run: crio --version
	I0723 14:49:16.803817   48243 command_runner.go:130] > crio version 1.29.1
	I0723 14:49:16.803839   48243 command_runner.go:130] > Version:        1.29.1
	I0723 14:49:16.803846   48243 command_runner.go:130] > GitCommit:      unknown
	I0723 14:49:16.803850   48243 command_runner.go:130] > GitCommitDate:  unknown
	I0723 14:49:16.803854   48243 command_runner.go:130] > GitTreeState:   clean
	I0723 14:49:16.803860   48243 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0723 14:49:16.803864   48243 command_runner.go:130] > GoVersion:      go1.21.6
	I0723 14:49:16.803868   48243 command_runner.go:130] > Compiler:       gc
	I0723 14:49:16.803874   48243 command_runner.go:130] > Platform:       linux/amd64
	I0723 14:49:16.803878   48243 command_runner.go:130] > Linkmode:       dynamic
	I0723 14:49:16.803881   48243 command_runner.go:130] > BuildTags:      
	I0723 14:49:16.803886   48243 command_runner.go:130] >   containers_image_ostree_stub
	I0723 14:49:16.803889   48243 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0723 14:49:16.803893   48243 command_runner.go:130] >   btrfs_noversion
	I0723 14:49:16.803898   48243 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0723 14:49:16.803902   48243 command_runner.go:130] >   libdm_no_deferred_remove
	I0723 14:49:16.803906   48243 command_runner.go:130] >   seccomp
	I0723 14:49:16.803910   48243 command_runner.go:130] > LDFlags:          unknown
	I0723 14:49:16.803917   48243 command_runner.go:130] > SeccompEnabled:   true
	I0723 14:49:16.803922   48243 command_runner.go:130] > AppArmorEnabled:  false
	I0723 14:49:16.807103   48243 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 14:49:16.808430   48243 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:49:16.811403   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:16.811787   48243 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:49:16.811808   48243 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:49:16.812075   48243 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 14:49:16.816109   48243 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0723 14:49:16.816193   48243 kubeadm.go:883] updating cluster {Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:49:16.816368   48243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:49:16.816414   48243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:49:16.859703   48243 command_runner.go:130] > {
	I0723 14:49:16.859725   48243 command_runner.go:130] >   "images": [
	I0723 14:49:16.859729   48243 command_runner.go:130] >     {
	I0723 14:49:16.859736   48243 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0723 14:49:16.859741   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859747   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0723 14:49:16.859751   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859755   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859763   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0723 14:49:16.859770   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0723 14:49:16.859775   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859779   48243 command_runner.go:130] >       "size": "87165492",
	I0723 14:49:16.859783   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859787   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859792   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859797   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859800   48243 command_runner.go:130] >     },
	I0723 14:49:16.859804   48243 command_runner.go:130] >     {
	I0723 14:49:16.859811   48243 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0723 14:49:16.859816   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859824   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0723 14:49:16.859828   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859833   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859840   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0723 14:49:16.859849   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0723 14:49:16.859852   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859856   48243 command_runner.go:130] >       "size": "87174707",
	I0723 14:49:16.859861   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859869   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859873   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859877   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859880   48243 command_runner.go:130] >     },
	I0723 14:49:16.859884   48243 command_runner.go:130] >     {
	I0723 14:49:16.859889   48243 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0723 14:49:16.859894   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859903   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0723 14:49:16.859909   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859913   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859919   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0723 14:49:16.859929   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0723 14:49:16.859934   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859938   48243 command_runner.go:130] >       "size": "1363676",
	I0723 14:49:16.859942   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.859948   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.859953   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.859957   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.859960   48243 command_runner.go:130] >     },
	I0723 14:49:16.859963   48243 command_runner.go:130] >     {
	I0723 14:49:16.859969   48243 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0723 14:49:16.859974   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.859979   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0723 14:49:16.859984   48243 command_runner.go:130] >       ],
	I0723 14:49:16.859988   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.859995   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0723 14:49:16.860009   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0723 14:49:16.860014   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860019   48243 command_runner.go:130] >       "size": "31470524",
	I0723 14:49:16.860022   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860026   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860032   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860036   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860040   48243 command_runner.go:130] >     },
	I0723 14:49:16.860043   48243 command_runner.go:130] >     {
	I0723 14:49:16.860050   48243 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0723 14:49:16.860055   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860061   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0723 14:49:16.860067   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860070   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860084   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0723 14:49:16.860094   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0723 14:49:16.860098   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860106   48243 command_runner.go:130] >       "size": "61245718",
	I0723 14:49:16.860117   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860124   48243 command_runner.go:130] >       "username": "nonroot",
	I0723 14:49:16.860133   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860139   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860145   48243 command_runner.go:130] >     },
	I0723 14:49:16.860151   48243 command_runner.go:130] >     {
	I0723 14:49:16.860157   48243 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0723 14:49:16.860162   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860166   48243 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0723 14:49:16.860171   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860175   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860184   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0723 14:49:16.860190   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0723 14:49:16.860196   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860200   48243 command_runner.go:130] >       "size": "150779692",
	I0723 14:49:16.860205   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860209   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860215   48243 command_runner.go:130] >       },
	I0723 14:49:16.860219   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860225   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860233   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860238   48243 command_runner.go:130] >     },
	I0723 14:49:16.860246   48243 command_runner.go:130] >     {
	I0723 14:49:16.860256   48243 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0723 14:49:16.860265   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860272   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0723 14:49:16.860279   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860285   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860299   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0723 14:49:16.860316   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0723 14:49:16.860322   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860327   48243 command_runner.go:130] >       "size": "117609954",
	I0723 14:49:16.860331   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860341   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860347   48243 command_runner.go:130] >       },
	I0723 14:49:16.860355   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860362   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860366   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860369   48243 command_runner.go:130] >     },
	I0723 14:49:16.860372   48243 command_runner.go:130] >     {
	I0723 14:49:16.860378   48243 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0723 14:49:16.860384   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860389   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0723 14:49:16.860393   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860397   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860419   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0723 14:49:16.860430   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0723 14:49:16.860433   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860436   48243 command_runner.go:130] >       "size": "112198984",
	I0723 14:49:16.860440   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860446   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860450   48243 command_runner.go:130] >       },
	I0723 14:49:16.860454   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860457   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860461   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860464   48243 command_runner.go:130] >     },
	I0723 14:49:16.860467   48243 command_runner.go:130] >     {
	I0723 14:49:16.860473   48243 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0723 14:49:16.860477   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860481   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0723 14:49:16.860484   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860488   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860495   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0723 14:49:16.860501   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0723 14:49:16.860504   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860513   48243 command_runner.go:130] >       "size": "85953945",
	I0723 14:49:16.860516   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.860520   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860523   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860526   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860529   48243 command_runner.go:130] >     },
	I0723 14:49:16.860536   48243 command_runner.go:130] >     {
	I0723 14:49:16.860542   48243 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0723 14:49:16.860545   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860559   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0723 14:49:16.860562   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860566   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860573   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0723 14:49:16.860579   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0723 14:49:16.860582   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860586   48243 command_runner.go:130] >       "size": "63051080",
	I0723 14:49:16.860589   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860592   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.860595   48243 command_runner.go:130] >       },
	I0723 14:49:16.860598   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860602   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860607   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.860610   48243 command_runner.go:130] >     },
	I0723 14:49:16.860616   48243 command_runner.go:130] >     {
	I0723 14:49:16.860622   48243 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0723 14:49:16.860627   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.860632   48243 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0723 14:49:16.860637   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860640   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.860647   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0723 14:49:16.860655   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0723 14:49:16.860659   48243 command_runner.go:130] >       ],
	I0723 14:49:16.860665   48243 command_runner.go:130] >       "size": "750414",
	I0723 14:49:16.860669   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.860675   48243 command_runner.go:130] >         "value": "65535"
	I0723 14:49:16.860678   48243 command_runner.go:130] >       },
	I0723 14:49:16.860681   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.860685   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.860691   48243 command_runner.go:130] >       "pinned": true
	I0723 14:49:16.860694   48243 command_runner.go:130] >     }
	I0723 14:49:16.860698   48243 command_runner.go:130] >   ]
	I0723 14:49:16.860700   48243 command_runner.go:130] > }
	I0723 14:49:16.860869   48243 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:49:16.860879   48243 crio.go:433] Images already preloaded, skipping extraction
	I0723 14:49:16.860931   48243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:49:16.892664   48243 command_runner.go:130] > {
	I0723 14:49:16.892689   48243 command_runner.go:130] >   "images": [
	I0723 14:49:16.892695   48243 command_runner.go:130] >     {
	I0723 14:49:16.892708   48243 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0723 14:49:16.892714   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892724   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0723 14:49:16.892729   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892735   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.892751   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0723 14:49:16.892761   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0723 14:49:16.892766   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892773   48243 command_runner.go:130] >       "size": "87165492",
	I0723 14:49:16.892784   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.892791   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.892801   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.892808   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.892816   48243 command_runner.go:130] >     },
	I0723 14:49:16.892821   48243 command_runner.go:130] >     {
	I0723 14:49:16.892828   48243 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0723 14:49:16.892835   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892839   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0723 14:49:16.892847   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892853   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.892869   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0723 14:49:16.892885   48243 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0723 14:49:16.892893   48243 command_runner.go:130] >       ],
	I0723 14:49:16.892901   48243 command_runner.go:130] >       "size": "87174707",
	I0723 14:49:16.892910   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.892922   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.892930   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.892935   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.892939   48243 command_runner.go:130] >     },
	I0723 14:49:16.892953   48243 command_runner.go:130] >     {
	I0723 14:49:16.892967   48243 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0723 14:49:16.892977   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.892988   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0723 14:49:16.892997   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893006   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893020   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0723 14:49:16.893032   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0723 14:49:16.893041   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893051   48243 command_runner.go:130] >       "size": "1363676",
	I0723 14:49:16.893060   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893067   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893077   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893087   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893095   48243 command_runner.go:130] >     },
	I0723 14:49:16.893103   48243 command_runner.go:130] >     {
	I0723 14:49:16.893116   48243 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0723 14:49:16.893124   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893133   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0723 14:49:16.893139   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893146   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893161   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0723 14:49:16.893184   48243 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0723 14:49:16.893193   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893200   48243 command_runner.go:130] >       "size": "31470524",
	I0723 14:49:16.893206   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893215   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893223   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893233   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893240   48243 command_runner.go:130] >     },
	I0723 14:49:16.893246   48243 command_runner.go:130] >     {
	I0723 14:49:16.893258   48243 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0723 14:49:16.893266   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893274   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0723 14:49:16.893283   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893289   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893310   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0723 14:49:16.893324   48243 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0723 14:49:16.893333   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893342   48243 command_runner.go:130] >       "size": "61245718",
	I0723 14:49:16.893351   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893362   48243 command_runner.go:130] >       "username": "nonroot",
	I0723 14:49:16.893367   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893370   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893374   48243 command_runner.go:130] >     },
	I0723 14:49:16.893377   48243 command_runner.go:130] >     {
	I0723 14:49:16.893383   48243 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0723 14:49:16.893389   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893394   48243 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0723 14:49:16.893398   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893402   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893410   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0723 14:49:16.893419   48243 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0723 14:49:16.893424   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893428   48243 command_runner.go:130] >       "size": "150779692",
	I0723 14:49:16.893434   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893438   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893442   48243 command_runner.go:130] >       },
	I0723 14:49:16.893446   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893452   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893455   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893459   48243 command_runner.go:130] >     },
	I0723 14:49:16.893462   48243 command_runner.go:130] >     {
	I0723 14:49:16.893469   48243 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0723 14:49:16.893473   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893478   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0723 14:49:16.893483   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893487   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893494   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0723 14:49:16.893503   48243 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0723 14:49:16.893508   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893512   48243 command_runner.go:130] >       "size": "117609954",
	I0723 14:49:16.893523   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893529   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893532   48243 command_runner.go:130] >       },
	I0723 14:49:16.893538   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893542   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893553   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893558   48243 command_runner.go:130] >     },
	I0723 14:49:16.893561   48243 command_runner.go:130] >     {
	I0723 14:49:16.893567   48243 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0723 14:49:16.893572   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893577   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0723 14:49:16.893583   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893587   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893616   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0723 14:49:16.893626   48243 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0723 14:49:16.893629   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893633   48243 command_runner.go:130] >       "size": "112198984",
	I0723 14:49:16.893637   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893641   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893646   48243 command_runner.go:130] >       },
	I0723 14:49:16.893650   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893654   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893657   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893660   48243 command_runner.go:130] >     },
	I0723 14:49:16.893664   48243 command_runner.go:130] >     {
	I0723 14:49:16.893669   48243 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0723 14:49:16.893676   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893681   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0723 14:49:16.893686   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893690   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893697   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0723 14:49:16.893704   48243 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0723 14:49:16.893707   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893710   48243 command_runner.go:130] >       "size": "85953945",
	I0723 14:49:16.893714   48243 command_runner.go:130] >       "uid": null,
	I0723 14:49:16.893718   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893725   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893729   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893732   48243 command_runner.go:130] >     },
	I0723 14:49:16.893735   48243 command_runner.go:130] >     {
	I0723 14:49:16.893740   48243 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0723 14:49:16.893744   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893748   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0723 14:49:16.893751   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893755   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893761   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0723 14:49:16.893767   48243 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0723 14:49:16.893771   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893774   48243 command_runner.go:130] >       "size": "63051080",
	I0723 14:49:16.893777   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893781   48243 command_runner.go:130] >         "value": "0"
	I0723 14:49:16.893784   48243 command_runner.go:130] >       },
	I0723 14:49:16.893788   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893791   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893795   48243 command_runner.go:130] >       "pinned": false
	I0723 14:49:16.893798   48243 command_runner.go:130] >     },
	I0723 14:49:16.893801   48243 command_runner.go:130] >     {
	I0723 14:49:16.893807   48243 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0723 14:49:16.893810   48243 command_runner.go:130] >       "repoTags": [
	I0723 14:49:16.893815   48243 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0723 14:49:16.893818   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893822   48243 command_runner.go:130] >       "repoDigests": [
	I0723 14:49:16.893829   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0723 14:49:16.893839   48243 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0723 14:49:16.893844   48243 command_runner.go:130] >       ],
	I0723 14:49:16.893848   48243 command_runner.go:130] >       "size": "750414",
	I0723 14:49:16.893851   48243 command_runner.go:130] >       "uid": {
	I0723 14:49:16.893855   48243 command_runner.go:130] >         "value": "65535"
	I0723 14:49:16.893858   48243 command_runner.go:130] >       },
	I0723 14:49:16.893862   48243 command_runner.go:130] >       "username": "",
	I0723 14:49:16.893869   48243 command_runner.go:130] >       "spec": null,
	I0723 14:49:16.893872   48243 command_runner.go:130] >       "pinned": true
	I0723 14:49:16.893880   48243 command_runner.go:130] >     }
	I0723 14:49:16.893885   48243 command_runner.go:130] >   ]
	I0723 14:49:16.893888   48243 command_runner.go:130] > }
	I0723 14:49:16.893996   48243 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:49:16.894007   48243 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:49:16.894014   48243 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0723 14:49:16.894114   48243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-574866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:49:16.894181   48243 ssh_runner.go:195] Run: crio config
	I0723 14:49:16.936698   48243 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0723 14:49:16.936727   48243 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0723 14:49:16.936735   48243 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0723 14:49:16.936739   48243 command_runner.go:130] > #
	I0723 14:49:16.936749   48243 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0723 14:49:16.936758   48243 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0723 14:49:16.936766   48243 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0723 14:49:16.936778   48243 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0723 14:49:16.936784   48243 command_runner.go:130] > # reload'.
	I0723 14:49:16.936792   48243 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0723 14:49:16.936802   48243 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0723 14:49:16.936812   48243 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0723 14:49:16.936828   48243 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0723 14:49:16.936834   48243 command_runner.go:130] > [crio]
	I0723 14:49:16.936845   48243 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0723 14:49:16.936855   48243 command_runner.go:130] > # containers images, in this directory.
	I0723 14:49:16.936863   48243 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0723 14:49:16.936878   48243 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0723 14:49:16.936889   48243 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0723 14:49:16.936903   48243 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0723 14:49:16.936912   48243 command_runner.go:130] > # imagestore = ""
	I0723 14:49:16.936926   48243 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0723 14:49:16.936939   48243 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0723 14:49:16.936963   48243 command_runner.go:130] > storage_driver = "overlay"
	I0723 14:49:16.936975   48243 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0723 14:49:16.936984   48243 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0723 14:49:16.936994   48243 command_runner.go:130] > storage_option = [
	I0723 14:49:16.937009   48243 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0723 14:49:16.937017   48243 command_runner.go:130] > ]
	I0723 14:49:16.937028   48243 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0723 14:49:16.937040   48243 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0723 14:49:16.937051   48243 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0723 14:49:16.937064   48243 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0723 14:49:16.937077   48243 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0723 14:49:16.937087   48243 command_runner.go:130] > # always happen on a node reboot
	I0723 14:49:16.937098   48243 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0723 14:49:16.937119   48243 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0723 14:49:16.937132   48243 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0723 14:49:16.937143   48243 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0723 14:49:16.937152   48243 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0723 14:49:16.937166   48243 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0723 14:49:16.937181   48243 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0723 14:49:16.937191   48243 command_runner.go:130] > # internal_wipe = true
	I0723 14:49:16.937205   48243 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0723 14:49:16.937217   48243 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0723 14:49:16.937227   48243 command_runner.go:130] > # internal_repair = false
	I0723 14:49:16.937239   48243 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0723 14:49:16.937250   48243 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0723 14:49:16.937263   48243 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0723 14:49:16.937275   48243 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0723 14:49:16.937287   48243 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0723 14:49:16.937296   48243 command_runner.go:130] > [crio.api]
	I0723 14:49:16.937305   48243 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0723 14:49:16.937315   48243 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0723 14:49:16.937338   48243 command_runner.go:130] > # IP address on which the stream server will listen.
	I0723 14:49:16.937348   48243 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0723 14:49:16.937359   48243 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0723 14:49:16.937370   48243 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0723 14:49:16.937379   48243 command_runner.go:130] > # stream_port = "0"
	I0723 14:49:16.937396   48243 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0723 14:49:16.937405   48243 command_runner.go:130] > # stream_enable_tls = false
	I0723 14:49:16.937416   48243 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0723 14:49:16.937425   48243 command_runner.go:130] > # stream_idle_timeout = ""
	I0723 14:49:16.937438   48243 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0723 14:49:16.937449   48243 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0723 14:49:16.937454   48243 command_runner.go:130] > # minutes.
	I0723 14:49:16.937459   48243 command_runner.go:130] > # stream_tls_cert = ""
	I0723 14:49:16.937467   48243 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0723 14:49:16.937475   48243 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0723 14:49:16.937481   48243 command_runner.go:130] > # stream_tls_key = ""
	I0723 14:49:16.937489   48243 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0723 14:49:16.937498   48243 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0723 14:49:16.937532   48243 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0723 14:49:16.937543   48243 command_runner.go:130] > # stream_tls_ca = ""
	I0723 14:49:16.937558   48243 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0723 14:49:16.937568   48243 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0723 14:49:16.937579   48243 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0723 14:49:16.937587   48243 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0723 14:49:16.937600   48243 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0723 14:49:16.937620   48243 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0723 14:49:16.937629   48243 command_runner.go:130] > [crio.runtime]
	I0723 14:49:16.937640   48243 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0723 14:49:16.937652   48243 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0723 14:49:16.937661   48243 command_runner.go:130] > # "nofile=1024:2048"
	I0723 14:49:16.937671   48243 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0723 14:49:16.937680   48243 command_runner.go:130] > # default_ulimits = [
	I0723 14:49:16.937688   48243 command_runner.go:130] > # ]
	I0723 14:49:16.937699   48243 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0723 14:49:16.937707   48243 command_runner.go:130] > # no_pivot = false
	I0723 14:49:16.937717   48243 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0723 14:49:16.937730   48243 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0723 14:49:16.937741   48243 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0723 14:49:16.937752   48243 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0723 14:49:16.937761   48243 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0723 14:49:16.937776   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0723 14:49:16.937792   48243 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0723 14:49:16.937802   48243 command_runner.go:130] > # Cgroup setting for conmon
	I0723 14:49:16.937816   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0723 14:49:16.937825   48243 command_runner.go:130] > conmon_cgroup = "pod"
	I0723 14:49:16.937838   48243 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0723 14:49:16.937849   48243 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0723 14:49:16.937859   48243 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0723 14:49:16.937864   48243 command_runner.go:130] > conmon_env = [
	I0723 14:49:16.937872   48243 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0723 14:49:16.937877   48243 command_runner.go:130] > ]
	I0723 14:49:16.937885   48243 command_runner.go:130] > # Additional environment variables to set for all the
	I0723 14:49:16.937894   48243 command_runner.go:130] > # containers. These are overridden if set in the
	I0723 14:49:16.937903   48243 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0723 14:49:16.937912   48243 command_runner.go:130] > # default_env = [
	I0723 14:49:16.937917   48243 command_runner.go:130] > # ]
	I0723 14:49:16.937929   48243 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0723 14:49:16.937940   48243 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0723 14:49:16.937947   48243 command_runner.go:130] > # selinux = false
	I0723 14:49:16.937957   48243 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0723 14:49:16.937967   48243 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0723 14:49:16.937976   48243 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0723 14:49:16.937985   48243 command_runner.go:130] > # seccomp_profile = ""
	I0723 14:49:16.937993   48243 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0723 14:49:16.938004   48243 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0723 14:49:16.938014   48243 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0723 14:49:16.938024   48243 command_runner.go:130] > # which might increase security.
	I0723 14:49:16.938031   48243 command_runner.go:130] > # This option is currently deprecated,
	I0723 14:49:16.938042   48243 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0723 14:49:16.938046   48243 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0723 14:49:16.938054   48243 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0723 14:49:16.938060   48243 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0723 14:49:16.938068   48243 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0723 14:49:16.938074   48243 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0723 14:49:16.938081   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.938085   48243 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0723 14:49:16.938091   48243 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0723 14:49:16.938105   48243 command_runner.go:130] > # the cgroup blockio controller.
	I0723 14:49:16.938115   48243 command_runner.go:130] > # blockio_config_file = ""
	I0723 14:49:16.938127   48243 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0723 14:49:16.938136   48243 command_runner.go:130] > # blockio parameters.
	I0723 14:49:16.938143   48243 command_runner.go:130] > # blockio_reload = false
	I0723 14:49:16.938153   48243 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0723 14:49:16.938162   48243 command_runner.go:130] > # irqbalance daemon.
	I0723 14:49:16.938170   48243 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0723 14:49:16.938182   48243 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0723 14:49:16.938195   48243 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0723 14:49:16.938208   48243 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0723 14:49:16.938220   48243 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0723 14:49:16.938233   48243 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0723 14:49:16.938242   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.938247   48243 command_runner.go:130] > # rdt_config_file = ""
	I0723 14:49:16.938251   48243 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0723 14:49:16.938257   48243 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0723 14:49:16.938327   48243 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0723 14:49:16.938342   48243 command_runner.go:130] > # separate_pull_cgroup = ""
	I0723 14:49:16.938351   48243 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0723 14:49:16.938360   48243 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0723 14:49:16.938370   48243 command_runner.go:130] > # will be added.
	I0723 14:49:16.938391   48243 command_runner.go:130] > # default_capabilities = [
	I0723 14:49:16.938400   48243 command_runner.go:130] > # 	"CHOWN",
	I0723 14:49:16.938407   48243 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0723 14:49:16.938415   48243 command_runner.go:130] > # 	"FSETID",
	I0723 14:49:16.938421   48243 command_runner.go:130] > # 	"FOWNER",
	I0723 14:49:16.938430   48243 command_runner.go:130] > # 	"SETGID",
	I0723 14:49:16.938437   48243 command_runner.go:130] > # 	"SETUID",
	I0723 14:49:16.938443   48243 command_runner.go:130] > # 	"SETPCAP",
	I0723 14:49:16.938449   48243 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0723 14:49:16.938455   48243 command_runner.go:130] > # 	"KILL",
	I0723 14:49:16.938460   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938474   48243 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0723 14:49:16.938487   48243 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0723 14:49:16.938497   48243 command_runner.go:130] > # add_inheritable_capabilities = false
	I0723 14:49:16.938515   48243 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0723 14:49:16.938524   48243 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0723 14:49:16.938528   48243 command_runner.go:130] > default_sysctls = [
	I0723 14:49:16.938533   48243 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0723 14:49:16.938539   48243 command_runner.go:130] > ]
	I0723 14:49:16.938543   48243 command_runner.go:130] > # List of devices on the host that a
	I0723 14:49:16.938549   48243 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0723 14:49:16.938554   48243 command_runner.go:130] > # allowed_devices = [
	I0723 14:49:16.938559   48243 command_runner.go:130] > # 	"/dev/fuse",
	I0723 14:49:16.938562   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938567   48243 command_runner.go:130] > # List of additional devices. specified as
	I0723 14:49:16.938575   48243 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0723 14:49:16.938582   48243 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0723 14:49:16.938587   48243 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0723 14:49:16.938594   48243 command_runner.go:130] > # additional_devices = [
	I0723 14:49:16.938597   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938602   48243 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0723 14:49:16.938608   48243 command_runner.go:130] > # cdi_spec_dirs = [
	I0723 14:49:16.938612   48243 command_runner.go:130] > # 	"/etc/cdi",
	I0723 14:49:16.938617   48243 command_runner.go:130] > # 	"/var/run/cdi",
	I0723 14:49:16.938621   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938632   48243 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0723 14:49:16.938637   48243 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0723 14:49:16.938643   48243 command_runner.go:130] > # Defaults to false.
	I0723 14:49:16.938650   48243 command_runner.go:130] > # device_ownership_from_security_context = false
	I0723 14:49:16.938662   48243 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0723 14:49:16.938675   48243 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0723 14:49:16.938684   48243 command_runner.go:130] > # hooks_dir = [
	I0723 14:49:16.938698   48243 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0723 14:49:16.938707   48243 command_runner.go:130] > # ]
	I0723 14:49:16.938717   48243 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0723 14:49:16.938733   48243 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0723 14:49:16.938743   48243 command_runner.go:130] > # its default mounts from the following two files:
	I0723 14:49:16.938750   48243 command_runner.go:130] > #
	I0723 14:49:16.938760   48243 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0723 14:49:16.938773   48243 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0723 14:49:16.938790   48243 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0723 14:49:16.938796   48243 command_runner.go:130] > #
	I0723 14:49:16.938801   48243 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0723 14:49:16.938809   48243 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0723 14:49:16.938815   48243 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0723 14:49:16.938822   48243 command_runner.go:130] > #      only add mounts it finds in this file.
	I0723 14:49:16.938825   48243 command_runner.go:130] > #
	I0723 14:49:16.938829   48243 command_runner.go:130] > # default_mounts_file = ""
	I0723 14:49:16.938836   48243 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0723 14:49:16.938848   48243 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0723 14:49:16.938859   48243 command_runner.go:130] > pids_limit = 1024
	I0723 14:49:16.938872   48243 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0723 14:49:16.938884   48243 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0723 14:49:16.938898   48243 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0723 14:49:16.938912   48243 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0723 14:49:16.938922   48243 command_runner.go:130] > # log_size_max = -1
	I0723 14:49:16.938932   48243 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0723 14:49:16.938941   48243 command_runner.go:130] > # log_to_journald = false
	I0723 14:49:16.938950   48243 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0723 14:49:16.938960   48243 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0723 14:49:16.938972   48243 command_runner.go:130] > # Path to directory for container attach sockets.
	I0723 14:49:16.938978   48243 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0723 14:49:16.938983   48243 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0723 14:49:16.938989   48243 command_runner.go:130] > # bind_mount_prefix = ""
	I0723 14:49:16.938994   48243 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0723 14:49:16.939000   48243 command_runner.go:130] > # read_only = false
	I0723 14:49:16.939008   48243 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0723 14:49:16.939020   48243 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0723 14:49:16.939030   48243 command_runner.go:130] > # live configuration reload.
	I0723 14:49:16.939037   48243 command_runner.go:130] > # log_level = "info"
	I0723 14:49:16.939046   48243 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0723 14:49:16.939057   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.939065   48243 command_runner.go:130] > # log_filter = ""
	I0723 14:49:16.939078   48243 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0723 14:49:16.939093   48243 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0723 14:49:16.939103   48243 command_runner.go:130] > # separated by comma.
	I0723 14:49:16.939120   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939130   48243 command_runner.go:130] > # uid_mappings = ""
	I0723 14:49:16.939139   48243 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0723 14:49:16.939150   48243 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0723 14:49:16.939159   48243 command_runner.go:130] > # separated by comma.
	I0723 14:49:16.939174   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939183   48243 command_runner.go:130] > # gid_mappings = ""
	I0723 14:49:16.939192   48243 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0723 14:49:16.939205   48243 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0723 14:49:16.939218   48243 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0723 14:49:16.939232   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939241   48243 command_runner.go:130] > # minimum_mappable_uid = -1
	I0723 14:49:16.939248   48243 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0723 14:49:16.939258   48243 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0723 14:49:16.939269   48243 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0723 14:49:16.939284   48243 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0723 14:49:16.939294   48243 command_runner.go:130] > # minimum_mappable_gid = -1
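	As a sketch only, a mapping in the containerUID:HostUID:Size form described above would look like the following; the ranges are illustrative, and the options are deprecated as noted:

	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"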
	I0723 14:49:16.939304   48243 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0723 14:49:16.939316   48243 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0723 14:49:16.939331   48243 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0723 14:49:16.939338   48243 command_runner.go:130] > # ctr_stop_timeout = 30
	I0723 14:49:16.939346   48243 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0723 14:49:16.939358   48243 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0723 14:49:16.939369   48243 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0723 14:49:16.939379   48243 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0723 14:49:16.939389   48243 command_runner.go:130] > drop_infra_ctr = false
	I0723 14:49:16.939398   48243 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0723 14:49:16.939409   48243 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0723 14:49:16.939420   48243 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0723 14:49:16.939426   48243 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0723 14:49:16.939436   48243 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0723 14:49:16.939449   48243 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0723 14:49:16.939460   48243 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0723 14:49:16.939471   48243 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0723 14:49:16.939480   48243 command_runner.go:130] > # shared_cpuset = ""
	I0723 14:49:16.939490   48243 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0723 14:49:16.939506   48243 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0723 14:49:16.939513   48243 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0723 14:49:16.939524   48243 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0723 14:49:16.939535   48243 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0723 14:49:16.939550   48243 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0723 14:49:16.939563   48243 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0723 14:49:16.939573   48243 command_runner.go:130] > # enable_criu_support = false
	I0723 14:49:16.939583   48243 command_runner.go:130] > # Enable/disable the generation of the container,
	I0723 14:49:16.939593   48243 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0723 14:49:16.939601   48243 command_runner.go:130] > # enable_pod_events = false
	I0723 14:49:16.939613   48243 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0723 14:49:16.939626   48243 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0723 14:49:16.939637   48243 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0723 14:49:16.939646   48243 command_runner.go:130] > # default_runtime = "runc"
	I0723 14:49:16.939657   48243 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0723 14:49:16.939671   48243 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0723 14:49:16.939683   48243 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0723 14:49:16.939693   48243 command_runner.go:130] > # creation as a file is not desired either.
	I0723 14:49:16.939709   48243 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0723 14:49:16.939719   48243 command_runner.go:130] > # the hostname is being managed dynamically.
	I0723 14:49:16.939729   48243 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0723 14:49:16.939737   48243 command_runner.go:130] > # ]
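	Restating the /etc/hostname example from the comments above as an actual setting, the list would be populated roughly as follows (a sketch, not part of this cluster's configuration):

	    absent_mount_sources_to_reject = [
	    	"/etc/hostname",
	    ]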
	I0723 14:49:16.939750   48243 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0723 14:49:16.939760   48243 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0723 14:49:16.939768   48243 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0723 14:49:16.939777   48243 command_runner.go:130] > # Each entry in the table should follow the format:
	I0723 14:49:16.939785   48243 command_runner.go:130] > #
	I0723 14:49:16.939793   48243 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0723 14:49:16.939804   48243 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0723 14:49:16.939860   48243 command_runner.go:130] > # runtime_type = "oci"
	I0723 14:49:16.939870   48243 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0723 14:49:16.939879   48243 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0723 14:49:16.939888   48243 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0723 14:49:16.939896   48243 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0723 14:49:16.939904   48243 command_runner.go:130] > # monitor_env = []
	I0723 14:49:16.939912   48243 command_runner.go:130] > # privileged_without_host_devices = false
	I0723 14:49:16.939927   48243 command_runner.go:130] > # allowed_annotations = []
	I0723 14:49:16.939935   48243 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0723 14:49:16.939939   48243 command_runner.go:130] > # Where:
	I0723 14:49:16.939946   48243 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0723 14:49:16.939959   48243 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0723 14:49:16.939972   48243 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0723 14:49:16.939984   48243 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0723 14:49:16.939990   48243 command_runner.go:130] > #   in $PATH.
	I0723 14:49:16.940003   48243 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0723 14:49:16.940013   48243 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0723 14:49:16.940019   48243 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0723 14:49:16.940025   48243 command_runner.go:130] > #   state.
	I0723 14:49:16.940035   48243 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0723 14:49:16.940047   48243 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0723 14:49:16.940060   48243 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0723 14:49:16.940071   48243 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0723 14:49:16.940081   48243 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0723 14:49:16.940094   48243 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0723 14:49:16.940102   48243 command_runner.go:130] > #   The currently recognized values are:
	I0723 14:49:16.940109   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0723 14:49:16.940128   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0723 14:49:16.940140   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0723 14:49:16.940152   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0723 14:49:16.940166   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0723 14:49:16.940178   48243 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0723 14:49:16.940189   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0723 14:49:16.940200   48243 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0723 14:49:16.940211   48243 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0723 14:49:16.940224   48243 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0723 14:49:16.940234   48243 command_runner.go:130] > #   deprecated option "conmon".
	I0723 14:49:16.940246   48243 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0723 14:49:16.940257   48243 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0723 14:49:16.940270   48243 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0723 14:49:16.940276   48243 command_runner.go:130] > #   should be moved to the container's cgroup
	I0723 14:49:16.940287   48243 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0723 14:49:16.940297   48243 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0723 14:49:16.940316   48243 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0723 14:49:16.940331   48243 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0723 14:49:16.940339   48243 command_runner.go:130] > #
	I0723 14:49:16.940349   48243 command_runner.go:130] > # Using the seccomp notifier feature:
	I0723 14:49:16.940356   48243 command_runner.go:130] > #
	I0723 14:49:16.940362   48243 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0723 14:49:16.940373   48243 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0723 14:49:16.940381   48243 command_runner.go:130] > #
	I0723 14:49:16.940391   48243 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0723 14:49:16.940408   48243 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0723 14:49:16.940416   48243 command_runner.go:130] > #
	I0723 14:49:16.940425   48243 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0723 14:49:16.940434   48243 command_runner.go:130] > # feature.
	I0723 14:49:16.940438   48243 command_runner.go:130] > #
	I0723 14:49:16.940448   48243 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0723 14:49:16.940458   48243 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0723 14:49:16.940471   48243 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0723 14:49:16.940483   48243 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0723 14:49:16.940495   48243 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0723 14:49:16.940503   48243 command_runner.go:130] > #
	I0723 14:49:16.940512   48243 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0723 14:49:16.940524   48243 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0723 14:49:16.940530   48243 command_runner.go:130] > #
	I0723 14:49:16.940536   48243 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0723 14:49:16.940546   48243 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0723 14:49:16.940554   48243 command_runner.go:130] > #
	I0723 14:49:16.940564   48243 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0723 14:49:16.940575   48243 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0723 14:49:16.940584   48243 command_runner.go:130] > # limitation.
	I0723 14:49:16.940593   48243 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0723 14:49:16.940602   48243 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0723 14:49:16.940608   48243 command_runner.go:130] > runtime_type = "oci"
	I0723 14:49:16.940615   48243 command_runner.go:130] > runtime_root = "/run/runc"
	I0723 14:49:16.940619   48243 command_runner.go:130] > runtime_config_path = ""
	I0723 14:49:16.940628   48243 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0723 14:49:16.940637   48243 command_runner.go:130] > monitor_cgroup = "pod"
	I0723 14:49:16.940650   48243 command_runner.go:130] > monitor_exec_cgroup = ""
	I0723 14:49:16.940658   48243 command_runner.go:130] > monitor_env = [
	I0723 14:49:16.940668   48243 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0723 14:49:16.940676   48243 command_runner.go:130] > ]
	I0723 14:49:16.940684   48243 command_runner.go:130] > privileged_without_host_devices = false
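	For comparison with the runc entry above, an additional handler following the same table format might look like the sketch below; the crun name, paths and annotation allow-list are hypothetical and only illustrate the fields documented earlier, including the seccomp notifier annotation:

	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    monitor_path = "/usr/libexec/crio/conmon"
	    allowed_annotations = [
	    	"io.kubernetes.cri-o.seccompNotifierAction",
	    ]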
	I0723 14:49:16.940697   48243 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0723 14:49:16.940704   48243 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0723 14:49:16.940713   48243 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0723 14:49:16.940727   48243 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0723 14:49:16.940742   48243 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0723 14:49:16.940753   48243 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0723 14:49:16.940768   48243 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0723 14:49:16.940782   48243 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0723 14:49:16.940789   48243 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0723 14:49:16.940799   48243 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0723 14:49:16.940807   48243 command_runner.go:130] > # Example:
	I0723 14:49:16.940815   48243 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0723 14:49:16.940823   48243 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0723 14:49:16.940830   48243 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0723 14:49:16.940838   48243 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0723 14:49:16.940843   48243 command_runner.go:130] > # cpuset = 0
	I0723 14:49:16.940848   48243 command_runner.go:130] > # cpushares = "0-1"
	I0723 14:49:16.940853   48243 command_runner.go:130] > # Where:
	I0723 14:49:16.940860   48243 command_runner.go:130] > # The workload name is workload-type.
	I0723 14:49:16.940869   48243 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0723 14:49:16.940874   48243 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0723 14:49:16.940879   48243 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0723 14:49:16.940890   48243 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0723 14:49:16.940900   48243 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
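	Collecting the commented example above into one place, a workload definition would be declared roughly as in this sketch, which mirrors the comments; a pod opts in via the io.crio/workload annotation and can override per container with io.crio.workload-type/$container_name:

	    [crio.runtime.workloads.workload-type]
	    activation_annotation = "io.crio/workload"
	    annotation_prefix = "io.crio.workload-type"
	    [crio.runtime.workloads.workload-type.resources]
	    cpuset = 0
	    cpushares = "0-1"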
	I0723 14:49:16.940907   48243 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0723 14:49:16.940917   48243 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0723 14:49:16.940924   48243 command_runner.go:130] > # Default value is set to true
	I0723 14:49:16.940931   48243 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0723 14:49:16.940939   48243 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0723 14:49:16.940947   48243 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0723 14:49:16.940952   48243 command_runner.go:130] > # Default value is set to 'false'
	I0723 14:49:16.940961   48243 command_runner.go:130] > # disable_hostport_mapping = false
	I0723 14:49:16.940971   48243 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0723 14:49:16.940976   48243 command_runner.go:130] > #
	I0723 14:49:16.940985   48243 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0723 14:49:16.940994   48243 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0723 14:49:16.941004   48243 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0723 14:49:16.941013   48243 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0723 14:49:16.941022   48243 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0723 14:49:16.941027   48243 command_runner.go:130] > [crio.image]
	I0723 14:49:16.941038   48243 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0723 14:49:16.941043   48243 command_runner.go:130] > # default_transport = "docker://"
	I0723 14:49:16.941050   48243 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0723 14:49:16.941056   48243 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0723 14:49:16.941062   48243 command_runner.go:130] > # global_auth_file = ""
	I0723 14:49:16.941067   48243 command_runner.go:130] > # The image used to instantiate infra containers.
	I0723 14:49:16.941077   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.941086   48243 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0723 14:49:16.941099   48243 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0723 14:49:16.941111   48243 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0723 14:49:16.941119   48243 command_runner.go:130] > # This option supports live configuration reload.
	I0723 14:49:16.941129   48243 command_runner.go:130] > # pause_image_auth_file = ""
	I0723 14:49:16.941137   48243 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0723 14:49:16.941148   48243 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0723 14:49:16.941155   48243 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0723 14:49:16.941160   48243 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0723 14:49:16.941167   48243 command_runner.go:130] > # pause_command = "/pause"
	I0723 14:49:16.941172   48243 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0723 14:49:16.941180   48243 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0723 14:49:16.941194   48243 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0723 14:49:16.941204   48243 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0723 14:49:16.941210   48243 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0723 14:49:16.941221   48243 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0723 14:49:16.941230   48243 command_runner.go:130] > # pinned_images = [
	I0723 14:49:16.941235   48243 command_runner.go:130] > # ]
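	A sketch of the three matching styles described above, using illustrative image names (only the pause image corresponds to a default mentioned in this config):

	    pinned_images = [
	    	"registry.k8s.io/pause:3.9",        # exact match
	    	"registry.k8s.io/kube-apiserver*",  # glob: wildcard at the end
	    	"*coredns*",                        # keyword: wildcards on both ends
	    ]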
	I0723 14:49:16.941247   48243 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0723 14:49:16.941260   48243 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0723 14:49:16.941278   48243 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0723 14:49:16.941290   48243 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0723 14:49:16.941298   48243 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0723 14:49:16.941302   48243 command_runner.go:130] > # signature_policy = ""
	I0723 14:49:16.941307   48243 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0723 14:49:16.941316   48243 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0723 14:49:16.941321   48243 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0723 14:49:16.941332   48243 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0723 14:49:16.941338   48243 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0723 14:49:16.941344   48243 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0723 14:49:16.941350   48243 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0723 14:49:16.941358   48243 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0723 14:49:16.941362   48243 command_runner.go:130] > # changing them here.
	I0723 14:49:16.941366   48243 command_runner.go:130] > # insecure_registries = [
	I0723 14:49:16.941371   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941377   48243 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0723 14:49:16.941383   48243 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0723 14:49:16.941387   48243 command_runner.go:130] > # image_volumes = "mkdir"
	I0723 14:49:16.941395   48243 command_runner.go:130] > # Temporary directory to use for storing big files
	I0723 14:49:16.941399   48243 command_runner.go:130] > # big_files_temporary_dir = ""
	I0723 14:49:16.941405   48243 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0723 14:49:16.941410   48243 command_runner.go:130] > # CNI plugins.
	I0723 14:49:16.941414   48243 command_runner.go:130] > [crio.network]
	I0723 14:49:16.941419   48243 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0723 14:49:16.941426   48243 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0723 14:49:16.941431   48243 command_runner.go:130] > # cni_default_network = ""
	I0723 14:49:16.941437   48243 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0723 14:49:16.941441   48243 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0723 14:49:16.941449   48243 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0723 14:49:16.941458   48243 command_runner.go:130] > # plugin_dirs = [
	I0723 14:49:16.941464   48243 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0723 14:49:16.941471   48243 command_runner.go:130] > # ]
	I0723 14:49:16.941476   48243 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0723 14:49:16.941482   48243 command_runner.go:130] > [crio.metrics]
	I0723 14:49:16.941486   48243 command_runner.go:130] > # Globally enable or disable metrics support.
	I0723 14:49:16.941489   48243 command_runner.go:130] > enable_metrics = true
	I0723 14:49:16.941498   48243 command_runner.go:130] > # Specify enabled metrics collectors.
	I0723 14:49:16.941505   48243 command_runner.go:130] > # Per default all metrics are enabled.
	I0723 14:49:16.941510   48243 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0723 14:49:16.941518   48243 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0723 14:49:16.941526   48243 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0723 14:49:16.941532   48243 command_runner.go:130] > # metrics_collectors = [
	I0723 14:49:16.941536   48243 command_runner.go:130] > # 	"operations",
	I0723 14:49:16.941540   48243 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0723 14:49:16.941546   48243 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0723 14:49:16.941550   48243 command_runner.go:130] > # 	"operations_errors",
	I0723 14:49:16.941555   48243 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0723 14:49:16.941558   48243 command_runner.go:130] > # 	"image_pulls_by_name",
	I0723 14:49:16.941564   48243 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0723 14:49:16.941568   48243 command_runner.go:130] > # 	"image_pulls_failures",
	I0723 14:49:16.941576   48243 command_runner.go:130] > # 	"image_pulls_successes",
	I0723 14:49:16.941580   48243 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0723 14:49:16.941584   48243 command_runner.go:130] > # 	"image_layer_reuse",
	I0723 14:49:16.941588   48243 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0723 14:49:16.941592   48243 command_runner.go:130] > # 	"containers_oom_total",
	I0723 14:49:16.941596   48243 command_runner.go:130] > # 	"containers_oom",
	I0723 14:49:16.941599   48243 command_runner.go:130] > # 	"processes_defunct",
	I0723 14:49:16.941603   48243 command_runner.go:130] > # 	"operations_total",
	I0723 14:49:16.941607   48243 command_runner.go:130] > # 	"operations_latency_seconds",
	I0723 14:49:16.941611   48243 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0723 14:49:16.941618   48243 command_runner.go:130] > # 	"operations_errors_total",
	I0723 14:49:16.941622   48243 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0723 14:49:16.941627   48243 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0723 14:49:16.941631   48243 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0723 14:49:16.941637   48243 command_runner.go:130] > # 	"image_pulls_success_total",
	I0723 14:49:16.941641   48243 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0723 14:49:16.941645   48243 command_runner.go:130] > # 	"containers_oom_count_total",
	I0723 14:49:16.941649   48243 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0723 14:49:16.941656   48243 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0723 14:49:16.941659   48243 command_runner.go:130] > # ]
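	As an illustrative subset only, enabling two collectors would look like the sketch below; per the comments above, "operations" is treated the same as "crio_operations" and "container_runtime_crio_operations":

	    metrics_collectors = [
	    	"operations",
	    	"image_pulls_failure_total",
	    ]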
	I0723 14:49:16.941664   48243 command_runner.go:130] > # The port on which the metrics server will listen.
	I0723 14:49:16.941669   48243 command_runner.go:130] > # metrics_port = 9090
	I0723 14:49:16.941680   48243 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0723 14:49:16.941685   48243 command_runner.go:130] > # metrics_socket = ""
	I0723 14:49:16.941690   48243 command_runner.go:130] > # The certificate for the secure metrics server.
	I0723 14:49:16.941697   48243 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0723 14:49:16.941703   48243 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0723 14:49:16.941709   48243 command_runner.go:130] > # certificate on any modification event.
	I0723 14:49:16.941713   48243 command_runner.go:130] > # metrics_cert = ""
	I0723 14:49:16.941720   48243 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0723 14:49:16.941727   48243 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0723 14:49:16.941736   48243 command_runner.go:130] > # metrics_key = ""
	I0723 14:49:16.941743   48243 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0723 14:49:16.941752   48243 command_runner.go:130] > [crio.tracing]
	I0723 14:49:16.941766   48243 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0723 14:49:16.941772   48243 command_runner.go:130] > # enable_tracing = false
	I0723 14:49:16.941779   48243 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0723 14:49:16.941786   48243 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0723 14:49:16.941796   48243 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0723 14:49:16.941807   48243 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
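	A sketch of turning tracing on with the commented defaults above (illustrative, not enabled in this run); a sampling rate of 1000000 per million means every span is sampled:

	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "0.0.0.0:4317"
	    tracing_sampling_rate_per_million = 1000000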
	I0723 14:49:16.941815   48243 command_runner.go:130] > # CRI-O NRI configuration.
	I0723 14:49:16.941823   48243 command_runner.go:130] > [crio.nri]
	I0723 14:49:16.941838   48243 command_runner.go:130] > # Globally enable or disable NRI.
	I0723 14:49:16.941847   48243 command_runner.go:130] > # enable_nri = false
	I0723 14:49:16.941855   48243 command_runner.go:130] > # NRI socket to listen on.
	I0723 14:49:16.941864   48243 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0723 14:49:16.941871   48243 command_runner.go:130] > # NRI plugin directory to use.
	I0723 14:49:16.941882   48243 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0723 14:49:16.941891   48243 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0723 14:49:16.941902   48243 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0723 14:49:16.941912   48243 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0723 14:49:16.941922   48243 command_runner.go:130] > # nri_disable_connections = false
	I0723 14:49:16.941933   48243 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0723 14:49:16.941944   48243 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0723 14:49:16.941952   48243 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0723 14:49:16.941960   48243 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0723 14:49:16.941972   48243 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0723 14:49:16.941982   48243 command_runner.go:130] > [crio.stats]
	I0723 14:49:16.942006   48243 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0723 14:49:16.942017   48243 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0723 14:49:16.942024   48243 command_runner.go:130] > # stats_collection_period = 0
	I0723 14:49:16.942066   48243 command_runner.go:130] ! time="2024-07-23 14:49:16.896508711Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0723 14:49:16.942089   48243 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0723 14:49:16.942249   48243 cni.go:84] Creating CNI manager for ""
	I0723 14:49:16.942263   48243 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0723 14:49:16.942279   48243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:49:16.942306   48243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-574866 NodeName:multinode-574866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:49:16.942493   48243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-574866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 14:49:16.942579   48243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:49:16.951900   48243 command_runner.go:130] > kubeadm
	I0723 14:49:16.951918   48243 command_runner.go:130] > kubectl
	I0723 14:49:16.951923   48243 command_runner.go:130] > kubelet
	I0723 14:49:16.951941   48243 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:49:16.951997   48243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 14:49:16.960592   48243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0723 14:49:16.976511   48243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:49:16.993957   48243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0723 14:49:17.008853   48243 ssh_runner.go:195] Run: grep 192.168.39.146	control-plane.minikube.internal$ /etc/hosts
	I0723 14:49:17.012333   48243 command_runner.go:130] > 192.168.39.146	control-plane.minikube.internal
	I0723 14:49:17.012412   48243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:49:17.153458   48243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:49:17.168208   48243 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866 for IP: 192.168.39.146
	I0723 14:49:17.168239   48243 certs.go:194] generating shared ca certs ...
	I0723 14:49:17.168261   48243 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:49:17.168458   48243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 14:49:17.168498   48243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 14:49:17.168509   48243 certs.go:256] generating profile certs ...
	I0723 14:49:17.168592   48243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/client.key
	I0723 14:49:17.168659   48243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key.21b56dd9
	I0723 14:49:17.168693   48243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key
	I0723 14:49:17.168704   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0723 14:49:17.168721   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0723 14:49:17.168733   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0723 14:49:17.168745   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0723 14:49:17.168754   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0723 14:49:17.168766   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0723 14:49:17.168778   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0723 14:49:17.168793   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0723 14:49:17.168845   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 14:49:17.168874   48243 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 14:49:17.168883   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:49:17.168910   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 14:49:17.168930   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:49:17.168952   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 14:49:17.168995   48243 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 14:49:17.169027   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.169041   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.169054   48243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem -> /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.169679   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:49:17.192158   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 14:49:17.214246   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:49:17.236439   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 14:49:17.259704   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 14:49:17.282268   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 14:49:17.304864   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:49:17.326241   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/multinode-574866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 14:49:17.347480   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 14:49:17.369796   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:49:17.393320   48243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 14:49:17.416250   48243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:49:17.431896   48243 ssh_runner.go:195] Run: openssl version
	I0723 14:49:17.437552   48243 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0723 14:49:17.437633   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 14:49:17.447948   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452659   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452732   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.452787   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 14:49:17.458036   48243 command_runner.go:130] > 3ec20f2e
	I0723 14:49:17.458104   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 14:49:17.467129   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:49:17.477149   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481537   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481634   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.481694   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:49:17.487085   48243 command_runner.go:130] > b5213941
	I0723 14:49:17.487163   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:49:17.519490   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 14:49:17.529979   48243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534063   48243 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534091   48243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.534136   48243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 14:49:17.539455   48243 command_runner.go:130] > 51391683
	I0723 14:49:17.539599   48243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 14:49:17.548547   48243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:49:17.553088   48243 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:49:17.553109   48243 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0723 14:49:17.553117   48243 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0723 14:49:17.553126   48243 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0723 14:49:17.553135   48243 command_runner.go:130] > Access: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553143   48243 command_runner.go:130] > Modify: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553152   48243 command_runner.go:130] > Change: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553163   48243 command_runner.go:130] >  Birth: 2024-07-23 14:42:29.582952522 +0000
	I0723 14:49:17.553210   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 14:49:17.558270   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.558447   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 14:49:17.563533   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.563588   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 14:49:17.568752   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.568812   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 14:49:17.573707   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.573847   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 14:49:17.578727   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.578993   48243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 14:49:17.583983   48243 command_runner.go:130] > Certificate will not expire
	I0723 14:49:17.584053   48243 kubeadm.go:392] StartCluster: {Name:multinode-574866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-574866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:49:17.584153   48243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:49:17.584188   48243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:49:17.620436   48243 command_runner.go:130] > 4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b
	I0723 14:49:17.620471   48243 command_runner.go:130] > a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346
	I0723 14:49:17.620481   48243 command_runner.go:130] > 4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d
	I0723 14:49:17.620493   48243 command_runner.go:130] > ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272
	I0723 14:49:17.620502   48243 command_runner.go:130] > 3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a
	I0723 14:49:17.620511   48243 command_runner.go:130] > be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5
	I0723 14:49:17.620521   48243 command_runner.go:130] > 5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051
	I0723 14:49:17.620530   48243 command_runner.go:130] > 905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65
	I0723 14:49:17.620553   48243 cri.go:89] found id: "4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b"
	I0723 14:49:17.620566   48243 cri.go:89] found id: "a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346"
	I0723 14:49:17.620569   48243 cri.go:89] found id: "4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d"
	I0723 14:49:17.620573   48243 cri.go:89] found id: "ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272"
	I0723 14:49:17.620576   48243 cri.go:89] found id: "3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a"
	I0723 14:49:17.620579   48243 cri.go:89] found id: "be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5"
	I0723 14:49:17.620581   48243 cri.go:89] found id: "5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051"
	I0723 14:49:17.620583   48243 cri.go:89] found id: "905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65"
	I0723 14:49:17.620585   48243 cri.go:89] found id: ""
	I0723 14:49:17.620623   48243 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.897273640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746405897254028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3aed01e4-51a0-42ce-8c08-c7039f4bdae1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.897965096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65fc7e36-7e92-419e-b5fd-561449802fd3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.898021892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65fc7e36-7e92-419e-b5fd-561449802fd3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.898351012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65fc7e36-7e92-419e-b5fd-561449802fd3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.936581527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09d2c997-03cf-464d-9ed1-a274d10b8d88 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.936671584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09d2c997-03cf-464d-9ed1-a274d10b8d88 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.937779718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c99967a8-be74-4766-bbb0-101437c81d34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.938188908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746405938163148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c99967a8-be74-4766-bbb0-101437c81d34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.938642745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a22e717-8863-4944-aecd-4927d853604d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.938695300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a22e717-8863-4944-aecd-4927d853604d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.939070395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a22e717-8863-4944-aecd-4927d853604d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.977515587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4060954-26b4-4699-b4f6-2a5c7383b9f8 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.977594114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4060954-26b4-4699-b4f6-2a5c7383b9f8 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.978699905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ecc551f-9612-46a7-89c0-8993507f8efc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.979371176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746405979320328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ecc551f-9612-46a7-89c0-8993507f8efc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.979863035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7e7c13f-8c97-4df2-a57f-71d9f619e66f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.979914111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7e7c13f-8c97-4df2-a57f-71d9f619e66f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:25 multinode-574866 crio[2866]: time="2024-07-23 14:53:25.980244943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7e7c13f-8c97-4df2-a57f-71d9f619e66f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.018420517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8c333d8-0518-4641-aed8-a0fea5361e72 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.018537935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8c333d8-0518-4641-aed8-a0fea5361e72 name=/runtime.v1.RuntimeService/Version
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.019694742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b495568f-f4a8-4f7c-ba6b-25106609b9a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.020126611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721746406020103090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b495568f-f4a8-4f7c-ba6b-25106609b9a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.020611172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e223a5ca-9936-4edf-9ffe-884cc1a01750 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.020684843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e223a5ca-9936-4edf-9ffe-884cc1a01750 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 14:53:26 multinode-574866 crio[2866]: time="2024-07-23 14:53:26.021041188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14ff66a46fbb2340833e84c04e781354550aff06a2cef922396149bff4b7d768,PodSandboxId:dd908861199eeeee7cd0ec26b5eac4a0bae78e924eb3b5fab3496b8f540d6991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721746198637014415,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44,PodSandboxId:2a5fa619122b1a508febb674fd7c01287add060474b6854456ead74a686f2b68,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721746165160662281,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363,PodSandboxId:53af495fc761d6c0b9d694655469cc4f831c4e8d9f604243b401f224cade9903,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721746165100689391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b,PodSandboxId:7e702c094efa76d7af447136d01d4a5967ba29dcfc57abdac1cd5806d261db7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721746164979217124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]
string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38848a12257cd9ae4ae75a9ddea715d66523997b47b08af87c9847d01f2149d,PodSandboxId:6ff1a483be3d0bc24ad62f30c5ac0e7167767e20ff9198beb5cb6804f5c83448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721746164904874552,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kub
ernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d,PodSandboxId:7de7d300aaef88a99400663bb81c517b1f644bc67a2bc56c824140548ccba289,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721746160122658614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2141edb0,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683,PodSandboxId:097958c3d04d01329013df97d6d5e4ff5e74e9f7880798325bbe600868b88072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721746160119094708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448,PodSandboxId:9f53eb965c5d83b8daf3b7de44c46089c0c6754c208cc82a816a1cce4eeb1548,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721746160047615304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a03066490a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634,PodSandboxId:d1e779749e473bbc32de14e8c6fc92aa569d47bf811150a9352376c088df7797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721746160002984284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd185490625c0771ce32ac2b6f5a41f80f2e3cc23e2089864db95ffb96a837c,PodSandboxId:c2131ed8bfd32ec3dafece4c2166f3144d72e56a139f04fb18e2a2e1618d2463,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721745842123283022,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q96vx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac55b5a2-2f09-4441-8dc7-a80407abaa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 5352c32e,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b,PodSandboxId:ebeea502a99cc46bbf4275c2ea317137e656f21e9e638d512d0ef7ed7f3737d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721745788359382008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8k97t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea62019-9fa6-4ea4-a7ce-1d6990cdc646,},Annotations:map[string]string{io.kubernetes.container.hash: 66dbf2c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87ecdc695287361ac5a011a27d19c2dee680bc5a846ee2815aab0e94f6dd346,PodSandboxId:c4903b55b1a75b7e91339d6405b340c56d083c9a7fba48148aeb07eb713fe536,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721745788322617434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e769cd6-3fa7-4db4-843c-55ad566c6caf,},Annotations:map[string]string{io.kubernetes.container.hash: a87999fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d,PodSandboxId:c6cea513ac543fd958f0c675f4f1cc1cf60d291b651ca2659044e46abfee13b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721745776743075867,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2j56b,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 196eb952-ce8f-4cb8-aadf-c62bdfb1375e,},Annotations:map[string]string{io.kubernetes.container.hash: 227a9a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272,PodSandboxId:8bdea4fd24e095040991cc59951cad92d6e512ff17a61ff114fe4b122543566f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721745773137669858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xzc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fff83ebe-fe7c-4699-94af-849be3c3f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: dc6c19b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a,PodSandboxId:e9816133c40d87a3cfcaab10604f776a348c373605f0f10288088b8d030bb064,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721745754267976276,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
29a23d18a3cf7abdb5a95b93ad2417,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5,PodSandboxId:776b440632be773f32185503dfebd5a3283019a973d0d1b4df501b04327bbf85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721745754259284359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee14aac6a0306649
0a636a81bfb581a,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051,PodSandboxId:367e4b5ce253ef0349c4da0f9ecb330da58f00ce71d0eabd78a42e7fbf97bc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721745754217977998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300a2557c65c82218b67d744c402a1d6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2141edb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65,PodSandboxId:30c5a92fbce2b6aa79149ddd23ea581ba97a43d50d17ed7ed9acc37aff073ce4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721745754209530721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-574866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdaacaf0cb51609c06244161bec37ce,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e223a5ca-9936-4edf-9ffe-884cc1a01750 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14ff66a46fbb2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   dd908861199ee       busybox-fc5497c4f-q96vx
	ffb4c63fdfc46       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   2a5fa619122b1       kindnet-2j56b
	3688b0a09531f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   53af495fc761d       coredns-7db6d8ff4d-8k97t
	8c48fef80c116       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   7e702c094efa7       kube-proxy-6xzc9
	b38848a12257c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   6ff1a483be3d0       storage-provisioner
	f235b9cfb7b3e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   7de7d300aaef8       etcd-multinode-574866
	4767b5c9840d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   097958c3d04d0       kube-controller-manager-multinode-574866
	edd34678a4337       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   9f53eb965c5d8       kube-apiserver-multinode-574866
	b69c6488bcfdc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   d1e779749e473       kube-scheduler-multinode-574866
	7cd185490625c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c2131ed8bfd32       busybox-fc5497c4f-q96vx
	4e595d9996574       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   ebeea502a99cc       coredns-7db6d8ff4d-8k97t
	a87ecdc695287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c4903b55b1a75       storage-provisioner
	4442a162f2430       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   c6cea513ac543       kindnet-2j56b
	ebf4f61fb738d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   8bdea4fd24e09       kube-proxy-6xzc9
	3140b73105eba       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   e9816133c40d8       kube-scheduler-multinode-574866
	be7075af99a3f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   776b440632be7       kube-apiserver-multinode-574866
	5f7c7a4d6150a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   367e4b5ce253e       etcd-multinode-574866
	905cbfc74b196       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   30c5a92fbce2b       kube-controller-manager-multinode-574866
	
	
	==> coredns [3688b0a09531f35aa6dbbd97d9904c544df2d2dde92d9d26f1ad9a8649dae363] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47243 - 24721 "HINFO IN 8108853635571185609.4362304376288203997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332231s
	
	
	==> coredns [4e595d99965746afabb0132501de66be96b3a2cbb40a810518145e71ca776f4b] <==
	[INFO] 10.244.0.3:53170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002160976s
	[INFO] 10.244.0.3:59966 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090345s
	[INFO] 10.244.0.3:36811 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000513289s
	[INFO] 10.244.0.3:56612 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319214s
	[INFO] 10.244.0.3:54315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067994s
	[INFO] 10.244.0.3:56766 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006233s
	[INFO] 10.244.0.3:43826 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064496s
	[INFO] 10.244.1.2:55191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098956s
	[INFO] 10.244.1.2:50947 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067791s
	[INFO] 10.244.1.2:38966 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064323s
	[INFO] 10.244.1.2:40157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063439s
	[INFO] 10.244.0.3:48325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120507s
	[INFO] 10.244.0.3:55380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074611s
	[INFO] 10.244.0.3:51387 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065366s
	[INFO] 10.244.0.3:44042 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096288s
	[INFO] 10.244.1.2:54659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105366s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000333846s
	[INFO] 10.244.1.2:33961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000269519s
	[INFO] 10.244.1.2:41107 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134656s
	[INFO] 10.244.0.3:51347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174612s
	[INFO] 10.244.0.3:37425 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045024s
	[INFO] 10.244.0.3:58196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000040542s
	[INFO] 10.244.0.3:43409 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059692s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-574866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-574866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=multinode-574866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_42_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:42:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-574866
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:49:23 +0000   Tue, 23 Jul 2024 14:43:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-574866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df01da1e1441481fba781beb810260b5
	  System UUID:                df01da1e-1441-481f-ba78-1beb810260b5
	  Boot ID:                    02842110-16cf-4fac-a5da-39b8dc15ce57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q96vx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 coredns-7db6d8ff4d-8k97t                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-574866                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2j56b                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-574866             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-574866    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-6xzc9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-574866             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-574866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-574866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-574866 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-574866 event: Registered Node multinode-574866 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-574866 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-574866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-574866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-574866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                node-controller  Node multinode-574866 event: Registered Node multinode-574866 in Controller
	
	
	Name:               multinode-574866-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-574866-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=multinode-574866
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_23T14_50_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:50:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-574866-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:51:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:51:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:51:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:51:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 23 Jul 2024 14:50:32 +0000   Tue, 23 Jul 2024 14:51:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    multinode-574866-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2d499d9beff42548473cad134041789
	  System UUID:                d2d499d9-beff-4254-8473-cad134041789
	  Boot ID:                    439afd0a-38f6-4b9e-b0fe-5419c938af12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ztnd7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-xndsk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-jms7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m50s (x2 over 9m50s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m50s (x2 over 9m50s)  kubelet          Node multinode-574866-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s (x2 over 9m50s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m50s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m30s                  kubelet          Node multinode-574866-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-574866-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-574866-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m20s                  node-controller  Node multinode-574866-m02 event: Registered Node multinode-574866-m02 in Controller
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-574866-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-574866-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059608] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.178904] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120684] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.244065] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +3.859098] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.995628] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.057810] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.975960] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.085325] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.177202] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.102282] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.026107] kauditd_printk_skb: 56 callbacks suppressed
	[Jul23 14:43] kauditd_printk_skb: 12 callbacks suppressed
	[Jul23 14:49] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.141771] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.164756] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.142324] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.266428] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +0.984005] systemd-fstab-generator[2948]: Ignoring "noauto" option for root device
	[  +2.075961] systemd-fstab-generator[3073]: Ignoring "noauto" option for root device
	[  +5.711528] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.043827] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.830922] systemd-fstab-generator[3907]: Ignoring "noauto" option for root device
	[ +20.879674] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5f7c7a4d6150a0f87da93c31e58099a968acca26b0785b7afb75d0d1d2327051] <==
	{"level":"info","ts":"2024-07-23T14:42:34.616251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	{"level":"info","ts":"2024-07-23T14:42:34.616367Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.620495Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.62054Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:42:34.622078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:43:36.909794Z","caller":"traceutil/trace.go:171","msg":"trace[1160322536] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"226.957657ms","start":"2024-07-23T14:43:36.6828Z","end":"2024-07-23T14:43:36.909758Z","steps":["trace[1160322536] 'process raft request'  (duration: 150.914185ms)","trace[1160322536] 'compare'  (duration: 75.897367ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:43:36.91173Z","caller":"traceutil/trace.go:171","msg":"trace[110772945] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"175.145523ms","start":"2024-07-23T14:43:36.736569Z","end":"2024-07-23T14:43:36.911714Z","steps":["trace[110772945] 'process raft request'  (duration: 174.95893ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:43:40.77904Z","caller":"traceutil/trace.go:171","msg":"trace[2130986330] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"110.789592ms","start":"2024-07-23T14:43:40.668235Z","end":"2024-07-23T14:43:40.779024Z","steps":["trace[2130986330] 'process raft request'  (duration: 110.649729ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:43:45.036357Z","caller":"traceutil/trace.go:171","msg":"trace[208837338] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"239.591556ms","start":"2024-07-23T14:43:44.796741Z","end":"2024-07-23T14:43:45.036332Z","steps":["trace[208837338] 'process raft request'  (duration: 239.137032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:44:30.332905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.430893ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751779267824159558 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-574866-m03.17e4debf2368c8c6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-574866-m03.17e4debf2368c8c6\" value_size:642 lease:8751779267824159188 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-23T14:44:30.333105Z","caller":"traceutil/trace.go:171","msg":"trace[2354266] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:610; }","duration":"134.533302ms","start":"2024-07-23T14:44:30.198542Z","end":"2024-07-23T14:44:30.333075Z","steps":["trace[2354266] 'read index received'  (duration: 133.879423ms)","trace[2354266] 'applied index is now lower than readState.Index'  (duration: 653.091µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T14:44:30.333192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.645181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-574866-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-23T14:44:30.333224Z","caller":"traceutil/trace.go:171","msg":"trace[2001289915] range","detail":"{range_begin:/registry/minions/multinode-574866-m03; range_end:; response_count:1; response_revision:573; }","duration":"134.700349ms","start":"2024-07-23T14:44:30.198515Z","end":"2024-07-23T14:44:30.333215Z","steps":["trace[2001289915] 'agreement among raft nodes before linearized reading'  (duration: 134.630139ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:44:30.333315Z","caller":"traceutil/trace.go:171","msg":"trace[841044821] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"238.960366ms","start":"2024-07-23T14:44:30.09434Z","end":"2024-07-23T14:44:30.3333Z","steps":["trace[841044821] 'process raft request'  (duration: 75.313081ms)","trace[841044821] 'compare'  (duration: 162.243825ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:44:30.33335Z","caller":"traceutil/trace.go:171","msg":"trace[575406015] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"170.172067ms","start":"2024-07-23T14:44:30.163172Z","end":"2024-07-23T14:44:30.333344Z","steps":["trace[575406015] 'process raft request'  (duration: 169.855728ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:47:44.194067Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T14:47:44.194192Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-574866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	{"level":"warn","ts":"2024-07-23T14:47:44.194353Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.194501Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.24689Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:47:44.247125Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:47:44.247533Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc85001aa37e7974","current-leader-member-id":"fc85001aa37e7974"}
	{"level":"info","ts":"2024-07-23T14:47:44.250547Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:47:44.250753Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:47:44.250801Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-574866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> etcd [f235b9cfb7b3eb9838207f9b4949b8359b2cee228aa23431c1ed4ad9ec06929d] <==
	{"level":"info","ts":"2024-07-23T14:49:20.484021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:49:20.484032Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:49:20.490061Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T14:49:20.492768Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fc85001aa37e7974","initial-advertise-peer-urls":["https://192.168.39.146:2380"],"listen-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T14:49:20.492869Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:49:20.493029Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:49:20.495507Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-23T14:49:20.484424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 switched to configuration voters=(18195949983872481652)"}
	{"level":"info","ts":"2024-07-23T14:49:20.502596Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","added-peer-id":"fc85001aa37e7974","added-peer-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-07-23T14:49:20.502752Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:49:20.502795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:49:22.33414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgPreVoteResp from fc85001aa37e7974 at term 2"}
	{"level":"info","ts":"2024-07-23T14:49:22.334389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgVoteResp from fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.334535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc85001aa37e7974 elected leader fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-23T14:49:22.339523Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc85001aa37e7974","local-member-attributes":"{Name:multinode-574866 ClientURLs:[https://192.168.39.146:2379]}","request-path":"/0/members/fc85001aa37e7974/attributes","cluster-id":"25c4f0770a3181de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:49:22.33964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:49:22.33967Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:49:22.339805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:49:22.340414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T14:49:22.342337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:49:22.342409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	
	
	==> kernel <==
	 14:53:26 up 11 min,  0 users,  load average: 0.01, 0.12, 0.09
	Linux multinode-574866 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4442a162f2430c61fcf11bab8b98bd7ba636d72f931e9f45fe99f3ff3e11994d] <==
	I0723 14:46:57.685380       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:07.684936       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:07.685052       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:07.685236       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:07.685261       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:07.685325       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:07.685344       1 main.go:299] handling current node
	I0723 14:47:17.692546       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:17.692590       1 main.go:299] handling current node
	I0723 14:47:17.692607       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:17.692613       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:17.692751       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:17.692766       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:27.689956       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:27.690233       1 main.go:299] handling current node
	I0723 14:47:27.690302       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:27.690330       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:27.690665       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:27.690708       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	I0723 14:47:37.690974       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:47:37.691053       1 main.go:299] handling current node
	I0723 14:47:37.691083       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:47:37.691095       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:47:37.691229       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0723 14:47:37.691248       1 main.go:322] Node multinode-574866-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ffb4c63fdfc4601c5c6d4c2a4feab2aa2f3b1c89c8352d394d8ecf7099e33c44] <==
	I0723 14:52:26.082251       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:52:36.086800       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:52:36.086930       1 main.go:299] handling current node
	I0723 14:52:36.086960       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:52:36.086979       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:52:46.090767       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:52:46.090892       1 main.go:299] handling current node
	I0723 14:52:46.090922       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:52:46.090940       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:52:56.090857       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:52:56.090998       1 main.go:299] handling current node
	I0723 14:52:56.091037       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:52:56.091056       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:53:06.090610       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:53:06.090729       1 main.go:299] handling current node
	I0723 14:53:06.090759       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:53:06.090777       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:53:16.091190       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:53:16.091236       1 main.go:299] handling current node
	I0723 14:53:16.091254       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:53:16.091259       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	I0723 14:53:26.081960       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0723 14:53:26.082005       1 main.go:299] handling current node
	I0723 14:53:26.082019       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0723 14:53:26.082025       1 main.go:322] Node multinode-574866-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [be7075af99a3fbf54cbd8ecd1a57a58d830930941f219cd7e811a302168869c5] <==
	W0723 14:47:44.222914       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.222953       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.222989       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223023       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223057       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223099       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223132       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223260       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223300       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223406       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223531       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223592       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223629       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223671       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223823       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223862       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223899       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223934       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.223978       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224022       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224080       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224118       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224159       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224196       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 14:47:44.224247       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [edd34678a4337d6f4639aed11e83f5b5b70984a7dac82fbe90adfcb66397c448] <==
	I0723 14:49:23.611938       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 14:49:23.612583       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 14:49:23.612636       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 14:49:23.612643       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 14:49:23.622698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:49:23.623274       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0723 14:49:23.629590       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 14:49:23.630278       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0723 14:49:23.656005       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 14:49:23.656035       1 aggregator.go:165] initial CRD sync complete...
	I0723 14:49:23.656058       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 14:49:23.656063       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 14:49:23.656068       1 cache.go:39] Caches are synced for autoregister controller
	I0723 14:49:23.678617       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 14:49:23.691099       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:49:23.691137       1 policy_source.go:224] refreshing policies
	I0723 14:49:23.695890       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:49:24.524998       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0723 14:49:25.663156       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:49:25.845310       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:49:25.867992       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:49:25.964021       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 14:49:25.982783       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 14:49:36.638673       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:49:36.844653       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4767b5c9840d6d5333526b546d265f04dc77dca9cfb37157cec88d924e67e683] <==
	I0723 14:50:02.204822       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m02\" does not exist"
	I0723 14:50:02.217691       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m02" podCIDRs=["10.244.1.0/24"]
	I0723 14:50:03.137025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.711µs"
	I0723 14:50:03.166292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.497µs"
	I0723 14:50:03.178647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="214.146µs"
	I0723 14:50:03.182139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.01µs"
	I0723 14:50:03.184099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.68µs"
	I0723 14:50:07.197044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.867µs"
	I0723 14:50:21.011815       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:21.033404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.52µs"
	I0723 14:50:21.048964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.366µs"
	I0723 14:50:24.435736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.21663ms"
	I0723 14:50:24.436001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.082µs"
	I0723 14:50:39.268756       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:40.282669       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:50:40.283290       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:50:40.301193       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.2.0/24"]
	I0723 14:50:59.611541       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:51:04.877342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:51:46.554199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.026949ms"
	I0723 14:51:46.554632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="195.389µs"
	I0723 14:51:56.511099       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-r7rxq"
	I0723 14:51:56.533339       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-r7rxq"
	I0723 14:51:56.533373       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-48s58"
	I0723 14:51:56.554536       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-48s58"
	
	
	==> kube-controller-manager [905cbfc74b1969439844b8c8a9900ead2e919e5dfba34e70bbf84512e04a0d65] <==
	I0723 14:43:36.915234       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m02\" does not exist"
	I0723 14:43:36.928836       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m02" podCIDRs=["10.244.1.0/24"]
	I0723 14:43:37.131369       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-574866-m02"
	I0723 14:43:56.811011       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:43:59.036281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.725752ms"
	I0723 14:43:59.049912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.57051ms"
	I0723 14:43:59.049987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.188µs"
	I0723 14:43:59.055785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.973µs"
	I0723 14:44:02.628974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.553811ms"
	I0723 14:44:02.629153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.834µs"
	I0723 14:44:02.778908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.834605ms"
	I0723 14:44:02.779138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.046µs"
	I0723 14:44:30.335009       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:44:30.334984       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:44:30.371654       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.2.0/24"]
	I0723 14:44:32.153894       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-574866-m03"
	I0723 14:44:49.677987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:18.019631       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:19.523926       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-574866-m03\" does not exist"
	I0723 14:45:19.526567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:45:19.537078       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-574866-m03" podCIDRs=["10.244.3.0/24"]
	I0723 14:45:38.908496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:46:22.206357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-574866-m02"
	I0723 14:46:22.275499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.900831ms"
	I0723 14:46:22.276514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.485µs"
	
	
	==> kube-proxy [8c48fef80c1162a81b7ea7e9cb65b9fffbf9bcb4ea4d12654b35b86802a3370b] <==
	I0723 14:49:25.357503       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:49:25.376767       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0723 14:49:25.498557       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:49:25.498600       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:49:25.498617       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:49:25.507583       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:49:25.507813       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:49:25.507826       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:49:25.514226       1 config.go:192] "Starting service config controller"
	I0723 14:49:25.514251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:49:25.514274       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:49:25.514278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:49:25.519201       1 config.go:319] "Starting node config controller"
	I0723 14:49:25.519225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:49:25.615603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:49:25.615737       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:49:25.623520       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ebf4f61fb738dd5e7f99819396bbd64e80342a9b9927679eca3935aafddb2272] <==
	I0723 14:42:53.350793       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:42:53.369193       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0723 14:42:53.458100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 14:42:53.458161       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 14:42:53.458178       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:42:53.460774       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:42:53.460986       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:42:53.461015       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:42:53.462946       1 config.go:192] "Starting service config controller"
	I0723 14:42:53.463241       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:42:53.463294       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:42:53.463300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:42:53.464072       1 config.go:319] "Starting node config controller"
	I0723 14:42:53.464095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:42:53.563797       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:42:53.563853       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:42:53.564133       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3140b73105eba0cdc9447dc0f36a96c430a6a499d3d31bc274ace2ed4faa409a] <==
	E0723 14:42:36.518546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:42:36.518528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:42:36.518600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:42:37.361698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 14:42:37.361768       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 14:42:37.447210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.447514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.457302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.457342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.462152       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:42:37.462268       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:42:37.471051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.471167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.472312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:42:37.472343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:42:37.609610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:42:37.609753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:42:37.685212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:42:37.685263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:42:37.726671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 14:42:37.726746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 14:42:37.739998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 14:42:37.740098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0723 14:42:40.612016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0723 14:47:44.196360       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b69c6488bcfdcaaf38a024b62401bf50c18a88afa71a21ecf6cf86c747e4d634] <==
	I0723 14:49:21.166587       1 serving.go:380] Generated self-signed cert in-memory
	W0723 14:49:23.605028       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 14:49:23.605105       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:49:23.605115       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 14:49:23.605121       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 14:49:23.624825       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:49:23.624972       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:49:23.633722       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:49:23.633773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:49:23.634124       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:49:23.633794       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:49:23.734939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.389418    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e769cd6-3fa7-4db4-843c-55ad566c6caf-tmp\") pod \"storage-provisioner\" (UID: \"3e769cd6-3fa7-4db4-843c-55ad566c6caf\") " pod="kube-system/storage-provisioner"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.389622    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196eb952-ce8f-4cb8-aadf-c62bdfb1375e-xtables-lock\") pod \"kindnet-2j56b\" (UID: \"196eb952-ce8f-4cb8-aadf-c62bdfb1375e\") " pod="kube-system/kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.390079    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196eb952-ce8f-4cb8-aadf-c62bdfb1375e-lib-modules\") pod \"kindnet-2j56b\" (UID: \"196eb952-ce8f-4cb8-aadf-c62bdfb1375e\") " pod="kube-system/kindnet-2j56b"
	Jul 23 14:49:24 multinode-574866 kubelet[3080]: I0723 14:49:24.390238    3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fff83ebe-fe7c-4699-94af-849be3c3f3ee-xtables-lock\") pod \"kube-proxy-6xzc9\" (UID: \"fff83ebe-fe7c-4699-94af-849be3c3f3ee\") " pod="kube-system/kube-proxy-6xzc9"
	Jul 23 14:49:27 multinode-574866 kubelet[3080]: I0723 14:49:27.377758    3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 23 14:50:19 multinode-574866 kubelet[3080]: E0723 14:50:19.431959    3080 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:50:19 multinode-574866 kubelet[3080]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:51:19 multinode-574866 kubelet[3080]: E0723 14:51:19.433913    3080 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:51:19 multinode-574866 kubelet[3080]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:51:19 multinode-574866 kubelet[3080]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:51:19 multinode-574866 kubelet[3080]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:51:19 multinode-574866 kubelet[3080]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:52:19 multinode-574866 kubelet[3080]: E0723 14:52:19.433734    3080 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:52:19 multinode-574866 kubelet[3080]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:52:19 multinode-574866 kubelet[3080]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:52:19 multinode-574866 kubelet[3080]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:52:19 multinode-574866 kubelet[3080]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 14:53:19 multinode-574866 kubelet[3080]: E0723 14:53:19.432418    3080 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 14:53:19 multinode-574866 kubelet[3080]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 14:53:19 multinode-574866 kubelet[3080]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 14:53:19 multinode-574866 kubelet[3080]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 14:53:19 multinode-574866 kubelet[3080]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 14:53:25.641380   50128 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-574866 -n multinode-574866
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-574866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.17s)
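The stderr captured for this test ends with `minikube logs` failing to replay the last start log: "failed to read file .../lastStart.txt: bufio.Scanner: token too long". That message comes from Go's bufio.Scanner, whose default per-line limit is bufio.MaxScanTokenSize (64 KiB); any single line longer than that aborts the scan. Below is a minimal, self-contained sketch of that standard-library behavior and the usual workaround of enlarging the scanner buffer. It only illustrates where the message originates and is not the code path minikube's logs.go actually uses; the hard-coded path is simply the one reported in the stderr above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path copied verbatim from the stderr above.
		f, err := os.Open("/home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The scanner's default limit is bufio.MaxScanTokenSize (64 KiB) per token;
		// a longer line makes Scan() stop and sc.Err() return bufio.ErrTooLong
		// ("token too long"). A larger buffer raises the limit (10 MiB here).
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
With the enlarged buffer the same file scans cleanly; without the Buffer call the loop exits early on the first oversized line, which matches the error logged above.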

                                                
                                    
x
+
TestPreload (331.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-676080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0723 14:57:11.819596   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:59:32.748040   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:59:49.699825   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-676080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m8.697113944s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-676080 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-676080 image pull gcr.io/k8s-minikube/busybox: (2.835629329s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-676080
E0723 15:02:11.818824   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-676080: exit status 82 (2m0.4633501s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-676080"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-676080 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-23 15:02:22.798128531 +0000 UTC m=+3952.003873248
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-676080 -n test-preload-676080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-676080 -n test-preload-676080: exit status 3 (18.54378803s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:02:41.338776   53272 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0723 15:02:41.338795   53272 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-676080" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-676080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-676080
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-676080: (1.119609413s)
--- FAIL: TestPreload (331.66s)
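TestPreload only fails at the stop step: `out/minikube-linux-amd64 stop -p test-preload-676080` exits with status 82, which the stderr above attributes to GUEST_STOP_TIMEOUT because the VM is still "Running" when minikube gives up. The sketch below is one way to reproduce that single step outside the test harness using only the Go standard library, so the exit code can be inspected directly. The binary path and profile name are copied from the report; the five-minute deadline is an arbitrary illustrative value, not the timeout the harness actually applies.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Generous deadline for the stop attempt; purely illustrative.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		// Binary path and profile name as they appear in the failing test.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "test-preload-676080")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		if errors.Is(ctx.Err(), context.DeadlineExceeded) {
			fmt.Println("stop did not finish before the deadline")
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In the report above, exit status 82 accompanies minikube's
			// GUEST_STOP_TIMEOUT message.
			fmt.Printf("minikube stop failed: exit status %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("minikube stop failed:", err)
		}
	}
Run it from the integration workspace root so out/minikube-linux-amd64 resolves, or substitute an absolute path to the binary.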

                                                
                                    
x
+
TestKubernetesUpgrade (355.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m35.136661018s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-503350] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-503350" primary control-plane node in "kubernetes-upgrade-503350" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:09:08.787909   58358 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:09:08.788187   58358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:08.788198   58358 out.go:304] Setting ErrFile to fd 2...
	I0723 15:09:08.788203   58358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:08.788421   58358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:09:08.789047   58358 out.go:298] Setting JSON to false
	I0723 15:09:08.790035   58358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6695,"bootTime":1721740654,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:09:08.790097   58358 start.go:139] virtualization: kvm guest
	I0723 15:09:08.791977   58358 out.go:177] * [kubernetes-upgrade-503350] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:09:08.793681   58358 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:09:08.793689   58358 notify.go:220] Checking for updates...
	I0723 15:09:08.796207   58358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:09:08.797509   58358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:09:08.798663   58358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:09:08.799916   58358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:09:08.801275   58358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:09:08.803269   58358 config.go:182] Loaded profile config "cert-expiration-457920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:08.803391   58358 config.go:182] Loaded profile config "cert-options-534062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:08.803472   58358 config.go:182] Loaded profile config "force-systemd-flag-357935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:08.803588   58358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:09:08.846611   58358 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 15:09:08.848303   58358 start.go:297] selected driver: kvm2
	I0723 15:09:08.848323   58358 start.go:901] validating driver "kvm2" against <nil>
	I0723 15:09:08.848338   58358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:09:08.849334   58358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:09:08.849431   58358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:09:08.865938   58358 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:09:08.865994   58358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 15:09:08.866273   58358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 15:09:08.866296   58358 cni.go:84] Creating CNI manager for ""
	I0723 15:09:08.866304   58358 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:09:08.866313   58358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 15:09:08.866369   58358 start.go:340] cluster config:
	{Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:09:08.866491   58358 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:09:08.869040   58358 out.go:177] * Starting "kubernetes-upgrade-503350" primary control-plane node in "kubernetes-upgrade-503350" cluster
	I0723 15:09:08.870427   58358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:09:08.870468   58358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0723 15:09:08.870487   58358 cache.go:56] Caching tarball of preloaded images
	I0723 15:09:08.870552   58358 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:09:08.870566   58358 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0723 15:09:08.870646   58358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/config.json ...
	I0723 15:09:08.870662   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/config.json: {Name:mk9782f4c842fca5fd118f0abf100d67d8ef848e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:08.870807   58358 start.go:360] acquireMachinesLock for kubernetes-upgrade-503350: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:09:15.928595   58358 start.go:364] duration metric: took 7.057739833s to acquireMachinesLock for "kubernetes-upgrade-503350"
	I0723 15:09:15.928659   58358 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:09:15.928775   58358 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 15:09:15.930969   58358 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 15:09:15.931173   58358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:09:15.931222   58358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:09:15.949272   58358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0723 15:09:15.949798   58358 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:09:15.950479   58358 main.go:141] libmachine: Using API Version  1
	I0723 15:09:15.950502   58358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:09:15.950900   58358 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:09:15.951102   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:09:15.951241   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:15.951385   58358 start.go:159] libmachine.API.Create for "kubernetes-upgrade-503350" (driver="kvm2")
	I0723 15:09:15.951413   58358 client.go:168] LocalClient.Create starting
	I0723 15:09:15.951457   58358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 15:09:15.951493   58358 main.go:141] libmachine: Decoding PEM data...
	I0723 15:09:15.951512   58358 main.go:141] libmachine: Parsing certificate...
	I0723 15:09:15.951592   58358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 15:09:15.951622   58358 main.go:141] libmachine: Decoding PEM data...
	I0723 15:09:15.951639   58358 main.go:141] libmachine: Parsing certificate...
	I0723 15:09:15.951670   58358 main.go:141] libmachine: Running pre-create checks...
	I0723 15:09:15.951682   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .PreCreateCheck
	I0723 15:09:15.952045   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetConfigRaw
	I0723 15:09:15.952471   58358 main.go:141] libmachine: Creating machine...
	I0723 15:09:15.952485   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .Create
	I0723 15:09:15.952616   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Creating KVM machine...
	I0723 15:09:15.953874   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found existing default KVM network
	I0723 15:09:15.956435   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:15.956240   59473 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0723 15:09:15.957660   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:15.957575   59473 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:49:1b} reservation:<nil>}
	I0723 15:09:15.959020   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:15.958915   59473 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000153a0}
	I0723 15:09:15.959047   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | created network xml: 
	I0723 15:09:15.959057   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | <network>
	I0723 15:09:15.959068   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   <name>mk-kubernetes-upgrade-503350</name>
	I0723 15:09:15.959077   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   <dns enable='no'/>
	I0723 15:09:15.959086   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   
	I0723 15:09:15.959099   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0723 15:09:15.959111   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |     <dhcp>
	I0723 15:09:15.959129   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0723 15:09:15.959142   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |     </dhcp>
	I0723 15:09:15.959149   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   </ip>
	I0723 15:09:15.959158   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG |   
	I0723 15:09:15.959165   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | </network>
	I0723 15:09:15.959174   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | 
	I0723 15:09:15.965317   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | trying to create private KVM network mk-kubernetes-upgrade-503350 192.168.61.0/24...
	I0723 15:09:16.049459   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | private KVM network mk-kubernetes-upgrade-503350 192.168.61.0/24 created
	I0723 15:09:16.049495   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350 ...
	I0723 15:09:16.049510   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:16.049432   59473 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:09:16.049598   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 15:09:16.049628   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 15:09:16.336956   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:16.336851   59473 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa...
	I0723 15:09:16.452355   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:16.452248   59473 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/kubernetes-upgrade-503350.rawdisk...
	I0723 15:09:16.452404   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Writing magic tar header
	I0723 15:09:16.452423   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Writing SSH key tar header
	I0723 15:09:16.452436   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:16.452402   59473 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350 ...
	I0723 15:09:16.452606   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350
	I0723 15:09:16.452641   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 15:09:16.452656   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350 (perms=drwx------)
	I0723 15:09:16.452671   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 15:09:16.452680   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 15:09:16.452690   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 15:09:16.452696   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 15:09:16.452703   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 15:09:16.452709   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Creating domain...
	I0723 15:09:16.452770   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:09:16.452795   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 15:09:16.452814   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 15:09:16.452826   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home/jenkins
	I0723 15:09:16.452850   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Checking permissions on dir: /home
	I0723 15:09:16.452860   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Skipping /home - not owner
	I0723 15:09:16.454180   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) define libvirt domain using xml: 
	I0723 15:09:16.454199   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) <domain type='kvm'>
	I0723 15:09:16.454210   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <name>kubernetes-upgrade-503350</name>
	I0723 15:09:16.454218   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <memory unit='MiB'>2200</memory>
	I0723 15:09:16.454226   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <vcpu>2</vcpu>
	I0723 15:09:16.454236   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <features>
	I0723 15:09:16.454244   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <acpi/>
	I0723 15:09:16.454266   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <apic/>
	I0723 15:09:16.454278   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <pae/>
	I0723 15:09:16.454294   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     
	I0723 15:09:16.454303   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   </features>
	I0723 15:09:16.454312   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <cpu mode='host-passthrough'>
	I0723 15:09:16.454319   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   
	I0723 15:09:16.454329   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   </cpu>
	I0723 15:09:16.454337   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <os>
	I0723 15:09:16.454346   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <type>hvm</type>
	I0723 15:09:16.454358   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <boot dev='cdrom'/>
	I0723 15:09:16.454371   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <boot dev='hd'/>
	I0723 15:09:16.454415   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <bootmenu enable='no'/>
	I0723 15:09:16.454438   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   </os>
	I0723 15:09:16.454452   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   <devices>
	I0723 15:09:16.454465   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <disk type='file' device='cdrom'>
	I0723 15:09:16.454483   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/boot2docker.iso'/>
	I0723 15:09:16.454495   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <target dev='hdc' bus='scsi'/>
	I0723 15:09:16.454513   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <readonly/>
	I0723 15:09:16.454567   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </disk>
	I0723 15:09:16.454594   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <disk type='file' device='disk'>
	I0723 15:09:16.454611   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 15:09:16.454629   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/kubernetes-upgrade-503350.rawdisk'/>
	I0723 15:09:16.454644   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <target dev='hda' bus='virtio'/>
	I0723 15:09:16.454652   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </disk>
	I0723 15:09:16.454662   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <interface type='network'>
	I0723 15:09:16.454689   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <source network='mk-kubernetes-upgrade-503350'/>
	I0723 15:09:16.454700   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <model type='virtio'/>
	I0723 15:09:16.454711   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </interface>
	I0723 15:09:16.454721   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <interface type='network'>
	I0723 15:09:16.454734   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <source network='default'/>
	I0723 15:09:16.454744   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <model type='virtio'/>
	I0723 15:09:16.454752   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </interface>
	I0723 15:09:16.454762   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <serial type='pty'>
	I0723 15:09:16.454771   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <target port='0'/>
	I0723 15:09:16.454781   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </serial>
	I0723 15:09:16.454805   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <console type='pty'>
	I0723 15:09:16.454820   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <target type='serial' port='0'/>
	I0723 15:09:16.454832   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </console>
	I0723 15:09:16.454842   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     <rng model='virtio'>
	I0723 15:09:16.454855   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)       <backend model='random'>/dev/random</backend>
	I0723 15:09:16.454865   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     </rng>
	I0723 15:09:16.454873   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     
	I0723 15:09:16.454882   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)     
	I0723 15:09:16.454901   58358 main.go:141] libmachine: (kubernetes-upgrade-503350)   </devices>
	I0723 15:09:16.454918   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) </domain>
	I0723 15:09:16.454932   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) 
	I0723 15:09:16.458777   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:79:7f:28 in network default
	I0723 15:09:16.459484   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Ensuring networks are active...
	I0723 15:09:16.459511   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:16.460370   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Ensuring network default is active
	I0723 15:09:16.460787   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Ensuring network mk-kubernetes-upgrade-503350 is active
	I0723 15:09:16.461578   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Getting domain xml...
	I0723 15:09:16.462457   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Creating domain...
	I0723 15:09:18.018332   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Waiting to get IP...
	I0723 15:09:18.019649   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.020278   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.020310   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:18.020177   59473 retry.go:31] will retry after 257.371172ms: waiting for machine to come up
	I0723 15:09:18.280479   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.282766   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.282785   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:18.282683   59473 retry.go:31] will retry after 346.15927ms: waiting for machine to come up
	I0723 15:09:18.631153   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.631777   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:18.631803   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:18.631725   59473 retry.go:31] will retry after 420.553556ms: waiting for machine to come up
	I0723 15:09:19.054530   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:19.054993   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:19.055031   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:19.054961   59473 retry.go:31] will retry after 571.600612ms: waiting for machine to come up
	I0723 15:09:19.628584   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:19.629249   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:19.629277   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:19.629187   59473 retry.go:31] will retry after 526.955247ms: waiting for machine to come up
	I0723 15:09:20.158448   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:20.158942   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:20.158971   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:20.158896   59473 retry.go:31] will retry after 854.066465ms: waiting for machine to come up
	I0723 15:09:21.014369   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:21.014941   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:21.014986   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:21.014856   59473 retry.go:31] will retry after 925.010205ms: waiting for machine to come up
	I0723 15:09:21.941493   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:21.941968   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:21.941991   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:21.941910   59473 retry.go:31] will retry after 1.00929959s: waiting for machine to come up
	I0723 15:09:22.953036   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:22.953512   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:22.953548   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:22.953460   59473 retry.go:31] will retry after 1.227367942s: waiting for machine to come up
	I0723 15:09:24.183017   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:24.183450   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:24.183477   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:24.183398   59473 retry.go:31] will retry after 1.433697111s: waiting for machine to come up
	I0723 15:09:25.619375   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:25.619914   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:25.619943   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:25.619863   59473 retry.go:31] will retry after 2.401854507s: waiting for machine to come up
	I0723 15:09:28.025224   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:28.025779   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:28.025809   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:28.025727   59473 retry.go:31] will retry after 2.201474303s: waiting for machine to come up
	I0723 15:09:30.228596   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:30.229162   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:30.229191   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:30.229105   59473 retry.go:31] will retry after 4.199595983s: waiting for machine to come up
	I0723 15:09:34.431724   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:34.432360   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find current IP address of domain kubernetes-upgrade-503350 in network mk-kubernetes-upgrade-503350
	I0723 15:09:34.432392   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | I0723 15:09:34.432307   59473 retry.go:31] will retry after 3.910284302s: waiting for machine to come up
	I0723 15:09:38.344762   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.345154   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Found IP for machine: 192.168.61.132
	I0723 15:09:38.345175   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has current primary IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.345181   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Reserving static IP address...
	I0723 15:09:38.345776   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-503350", mac: "52:54:00:ae:94:16", ip: "192.168.61.132"} in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.439337   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Getting to WaitForSSH function...
	I0723 15:09:38.439363   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Reserved static IP address: 192.168.61.132
	I0723 15:09:38.439399   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Waiting for SSH to be available...
	I0723 15:09:38.442761   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.443228   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:38.443255   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.443462   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Using SSH client type: external
	I0723 15:09:38.443500   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa (-rw-------)
	I0723 15:09:38.443549   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:09:38.443590   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | About to run SSH command:
	I0723 15:09:38.443610   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | exit 0
	I0723 15:09:38.566411   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | SSH cmd err, output: <nil>: 
	I0723 15:09:38.566704   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) KVM machine creation complete!
	I0723 15:09:38.566970   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetConfigRaw
	I0723 15:09:38.567583   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:38.567787   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:38.568064   58358 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 15:09:38.568081   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetState
	I0723 15:09:38.569496   58358 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 15:09:38.569512   58358 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 15:09:38.569520   58358 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 15:09:38.569529   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:38.572344   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.572692   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:38.572712   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.572866   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:38.573023   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.573168   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.573292   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:38.573437   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:38.573682   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:38.573703   58358 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 15:09:38.673782   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:09:38.673813   58358 main.go:141] libmachine: Detecting the provisioner...
	I0723 15:09:38.673821   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:38.676584   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.676952   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:38.676985   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.677149   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:38.677341   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.677505   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.677606   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:38.677762   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:38.677966   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:38.677980   58358 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 15:09:38.775179   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 15:09:38.775301   58358 main.go:141] libmachine: found compatible host: buildroot
	I0723 15:09:38.775318   58358 main.go:141] libmachine: Provisioning with buildroot...
	I0723 15:09:38.775330   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:09:38.775616   58358 buildroot.go:166] provisioning hostname "kubernetes-upgrade-503350"
	I0723 15:09:38.775646   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:09:38.775863   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:38.778771   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.779181   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:38.779210   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.779305   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:38.779479   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.779702   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.779842   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:38.779999   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:38.780162   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:38.780173   58358 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-503350 && echo "kubernetes-upgrade-503350" | sudo tee /etc/hostname
	I0723 15:09:38.892696   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-503350
	
	I0723 15:09:38.892724   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:38.895879   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.896273   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:38.896315   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:38.896469   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:38.896644   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.896758   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:38.896914   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:38.897127   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:38.897335   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:38.897359   58358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-503350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-503350/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-503350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:09:39.002537   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:09:39.002565   58358 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:09:39.002584   58358 buildroot.go:174] setting up certificates
	I0723 15:09:39.002593   58358 provision.go:84] configureAuth start
	I0723 15:09:39.002601   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:09:39.002950   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:09:39.005930   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.006337   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.006414   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.006513   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.008850   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.009220   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.009249   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.009394   58358 provision.go:143] copyHostCerts
	I0723 15:09:39.009472   58358 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:09:39.009487   58358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:09:39.009553   58358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:09:39.009703   58358 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:09:39.009716   58358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:09:39.009761   58358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:09:39.009841   58358 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:09:39.009855   58358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:09:39.009880   58358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:09:39.009924   58358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-503350 san=[127.0.0.1 192.168.61.132 kubernetes-upgrade-503350 localhost minikube]
	I0723 15:09:39.057900   58358 provision.go:177] copyRemoteCerts
	I0723 15:09:39.057971   58358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:09:39.057995   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.060995   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.061413   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.061445   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.061639   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.061850   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.061985   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.062102   58358 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:09:39.140385   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:09:39.163294   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0723 15:09:39.185992   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:09:39.208530   58358 provision.go:87] duration metric: took 205.922775ms to configureAuth
	I0723 15:09:39.208562   58358 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:09:39.208729   58358 config.go:182] Loaded profile config "kubernetes-upgrade-503350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:09:39.208797   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.211782   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.212097   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.212129   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.212302   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.212505   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.212658   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.212793   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.213031   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:39.213275   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:39.213300   58358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:09:39.467976   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
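
The step above opens an SSH session to the freshly created VM, writes a CRI-O sysconfig drop-in (CRIO_MINIKUBE_OPTIONS) and restarts the service. A minimal sketch of issuing such a remote command with golang.org/x/crypto/ssh — not minikube's actual sshutil code; the host, user, key path and command string are copied from the log above:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported by sshutil.go in the log above.
	keyPath := "/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa"
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; real code should verify host keys
	}
	client, err := ssh.Dial("tcp", "192.168.61.132:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same drop-in the provisioner writes before restarting CRI-O.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Println(string(out), err)
}
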
	
	I0723 15:09:39.468004   58358 main.go:141] libmachine: Checking connection to Docker...
	I0723 15:09:39.468012   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetURL
	I0723 15:09:39.469396   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | Using libvirt version 6000000
	I0723 15:09:39.471972   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.472424   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.472450   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.472680   58358 main.go:141] libmachine: Docker is up and running!
	I0723 15:09:39.472696   58358 main.go:141] libmachine: Reticulating splines...
	I0723 15:09:39.472704   58358 client.go:171] duration metric: took 23.521281724s to LocalClient.Create
	I0723 15:09:39.472731   58358 start.go:167] duration metric: took 23.521346279s to libmachine.API.Create "kubernetes-upgrade-503350"
	I0723 15:09:39.472742   58358 start.go:293] postStartSetup for "kubernetes-upgrade-503350" (driver="kvm2")
	I0723 15:09:39.472758   58358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:09:39.472780   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:39.472999   58358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:09:39.473023   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.475300   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.475712   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.475744   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.475881   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.476101   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.476271   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.476416   58358 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:09:39.560410   58358 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:09:39.564204   58358 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:09:39.564225   58358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:09:39.564286   58358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:09:39.564397   58358 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:09:39.564496   58358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:09:39.573215   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:09:39.595658   58358 start.go:296] duration metric: took 122.900893ms for postStartSetup
	I0723 15:09:39.595703   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetConfigRaw
	I0723 15:09:39.596368   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:09:39.598984   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.599374   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.599403   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.599632   58358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/config.json ...
	I0723 15:09:39.599807   58358 start.go:128] duration metric: took 23.671020385s to createHost
	I0723 15:09:39.599839   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.602148   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.602496   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.602531   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.602668   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.602859   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.603022   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.603150   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.603299   58358 main.go:141] libmachine: Using SSH client type: native
	I0723 15:09:39.603492   58358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:09:39.603505   58358 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:09:39.699027   58358 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721747379.677398754
	
	I0723 15:09:39.699051   58358 fix.go:216] guest clock: 1721747379.677398754
	I0723 15:09:39.699057   58358 fix.go:229] Guest: 2024-07-23 15:09:39.677398754 +0000 UTC Remote: 2024-07-23 15:09:39.599819029 +0000 UTC m=+30.848586601 (delta=77.579725ms)
	I0723 15:09:39.699077   58358 fix.go:200] guest clock delta is within tolerance: 77.579725ms
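
The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock; the ~77ms delta is inside tolerance, so no adjustment is made. A small sketch of that comparison (the 2s tolerance below is an assumption for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the "date +%s.%N" output captured from the guest
// and returns the guest time plus its offset from the local (host) clock.
func guestClockDelta(dateOutput string) (time.Time, time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec).UTC()
	return guest, guest.Sub(time.Now().UTC()), nil
}

func main() {
	guest, delta, err := guestClockDelta("1721747379.677398754") // value from the log
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest=%s delta=%s within=%v\n",
		guest, delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}
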
	I0723 15:09:39.699084   58358 start.go:83] releasing machines lock for "kubernetes-upgrade-503350", held for 23.770457932s
	I0723 15:09:39.699113   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:39.699373   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:09:39.702540   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.702924   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.702954   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.703121   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:39.703658   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:39.703829   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:09:39.704016   58358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:09:39.704050   58358 ssh_runner.go:195] Run: cat /version.json
	I0723 15:09:39.704070   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.704076   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:09:39.706656   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.706841   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.707013   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.707056   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.707196   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.707251   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:39.707267   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:39.707358   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.707401   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:09:39.707525   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.707539   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:09:39.707677   58358 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:09:39.707749   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:09:39.707918   58358 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:09:39.823196   58358 ssh_runner.go:195] Run: systemctl --version
	I0723 15:09:39.829474   58358 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:09:40.000499   58358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:09:40.008671   58358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:09:40.008756   58358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:09:40.026681   58358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:09:40.026713   58358 start.go:495] detecting cgroup driver to use...
	I0723 15:09:40.026791   58358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:09:40.042834   58358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:09:40.057247   58358 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:09:40.057317   58358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:09:40.071885   58358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:09:40.085914   58358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:09:40.209424   58358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:09:40.402644   58358 docker.go:233] disabling docker service ...
	I0723 15:09:40.402712   58358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:09:40.419944   58358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:09:40.433937   58358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:09:40.568365   58358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:09:40.700256   58358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:09:40.713259   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:09:40.732389   58358 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:09:40.732435   58358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:09:40.743062   58358 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:09:40.743127   58358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:09:40.753777   58358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:09:40.764361   58358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
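
The sed invocations above point CRI-O at the registry.k8s.io/pause:3.2 pause image, switch cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf. An equivalent in-process rewrite in Go, mirroring those sed expressions (a sketch only; the run above shells out to sed as logged):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// pause_image and cgroup_manager are replaced wholesale, as in the log.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		panic(err)
	}
}
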
	I0723 15:09:40.774473   58358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:09:40.785088   58358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:09:40.795695   58358 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:09:40.795746   58358 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:09:40.812497   58358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
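
Because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist on the fresh guest, the runner falls back to loading br_netfilter and then enables IPv4 forwarding, exactly as the commands above show. The same fallback expressed in Go (must run as root inside the guest; a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only exists once the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}
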
	I0723 15:09:40.824264   58358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:09:40.960349   58358 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:09:41.100570   58358 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:09:41.100628   58358 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:09:41.105155   58358 start.go:563] Will wait 60s for crictl version
	I0723 15:09:41.105213   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:41.108716   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:09:41.151366   58358 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:09:41.151468   58358 ssh_runner.go:195] Run: crio --version
	I0723 15:09:41.182083   58358 ssh_runner.go:195] Run: crio --version
	I0723 15:09:41.214689   58358 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:09:41.216271   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:09:41.219558   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:41.220048   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:09:31 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:09:41.220069   58358 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:09:41.220288   58358 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:09:41.224932   58358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:09:41.240117   58358 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:09:41.240220   58358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:09:41.240265   58358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:09:41.274753   58358 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:09:41.274818   58358 ssh_runner.go:195] Run: which lz4
	I0723 15:09:41.278404   58358 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:09:41.282531   58358 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:09:41.282563   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:09:42.779086   58358 crio.go:462] duration metric: took 1.500730027s to copy over tarball
	I0723 15:09:42.779171   58358 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:09:45.375376   58358 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.59612324s)
	I0723 15:09:45.375427   58358 crio.go:469] duration metric: took 2.596305885s to extract the tarball
	I0723 15:09:45.375436   58358 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:09:45.418549   58358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:09:45.465511   58358 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
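
Both `sudo crictl images --output json` calls come back without registry.k8s.io/kube-apiserver:v1.20.0, so the runner concludes the preload did not supply the images and falls back to the local image cache. A sketch of that presence check, assuming crictl's JSON output keeps the image list under an "images" array with "repoTags" fields (field names are an assumption here):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.20.0"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded images present")
				return
			}
		}
	}
	fmt.Println("couldn't find", want, "- assuming images are not preloaded")
}
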
	I0723 15:09:45.465539   58358 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:09:45.465623   58358 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:09:45.465659   58358 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:09:45.465676   58358 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:09:45.465680   58358 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:09:45.465737   58358 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:09:45.465767   58358 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:09:45.465640   58358 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:09:45.465688   58358 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:09:45.467254   58358 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:09:45.467259   58358 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:09:45.467255   58358 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:09:45.467302   58358 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:09:45.467335   58358 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:09:45.467470   58358 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:09:45.467483   58358 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:09:45.467515   58358 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:09:45.745181   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:09:45.756802   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:09:45.770904   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:09:45.771013   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:09:45.783282   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:09:45.792412   58358 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:09:45.792475   58358 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:09:45.792521   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.794897   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:09:45.814663   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:09:45.855607   58358 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:09:45.855651   58358 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:09:45.855700   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.917010   58358 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:09:45.917040   58358 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:09:45.917056   58358 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:09:45.917074   58358 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:09:45.917112   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.917120   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.931074   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:09:45.931091   58358 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:09:45.931127   58358 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:09:45.931156   58358 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:09:45.931172   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.931190   58358 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:09:45.931229   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.935041   58358 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:09:45.935074   58358 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:09:45.935089   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:09:45.935104   58358 ssh_runner.go:195] Run: which crictl
	I0723 15:09:45.936938   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:09:45.937015   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:09:45.981170   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:09:45.981221   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:09:45.981247   58358 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:09:45.981270   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:09:46.036293   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:09:46.037407   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:09:46.058103   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:09:46.088405   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:09:46.088444   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:09:46.088512   58358 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:09:46.350411   58358 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:09:46.495251   58358 cache_images.go:92] duration metric: took 1.029688837s to LoadCachedImages
	W0723 15:09:46.495346   58358 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0723 15:09:46.495365   58358 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I0723 15:09:46.495487   58358 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-503350 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:09:46.495570   58358 ssh_runner.go:195] Run: crio config
	I0723 15:09:46.550983   58358 cni.go:84] Creating CNI manager for ""
	I0723 15:09:46.551019   58358 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:09:46.551036   58358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:09:46.551062   58358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-503350 NodeName:kubernetes-upgrade-503350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:09:46.551255   58358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-503350"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
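
The block above is the generated kubeadm config (v1beta2 InitConfiguration and ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of rendering such a config from the node parameters with text/template — the template below is a simplified subset for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Values taken from the cluster config logged above.
	params := struct {
		AdvertiseAddress, NodeName, KubernetesVersion, PodSubnet, ServiceSubnet string
		APIServerPort                                                           int
	}{
		AdvertiseAddress:  "192.168.61.132",
		NodeName:          "kubernetes-upgrade-503350",
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		APIServerPort:     8443,
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
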
	
	I0723 15:09:46.551334   58358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:09:46.561316   58358 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:09:46.561392   58358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:09:46.571376   58358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0723 15:09:46.590528   58358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:09:46.607862   58358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0723 15:09:46.624829   58358 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I0723 15:09:46.628820   58358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:09:46.641717   58358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:09:46.759602   58358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:09:46.777148   58358 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350 for IP: 192.168.61.132
	I0723 15:09:46.777179   58358 certs.go:194] generating shared ca certs ...
	I0723 15:09:46.777201   58358 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:46.777378   58358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:09:46.777436   58358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:09:46.777451   58358 certs.go:256] generating profile certs ...
	I0723 15:09:46.777524   58358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.key
	I0723 15:09:46.777548   58358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.crt with IP's: []
	I0723 15:09:46.965238   58358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.crt ...
	I0723 15:09:46.965278   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.crt: {Name:mk7cfd78d6a0ab259b062258d5d15b01f1e6718a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:46.965437   58358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.key ...
	I0723 15:09:46.965452   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.key: {Name:mk7687200d461c470715125cc322234c7ec5fcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:46.965538   58358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key.70e02c22
	I0723 15:09:46.965555   58358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt.70e02c22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.132]
	I0723 15:09:47.049913   58358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt.70e02c22 ...
	I0723 15:09:47.049945   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt.70e02c22: {Name:mk4ab93e0d78e855ceac3ceccddd773138b44ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:47.050108   58358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key.70e02c22 ...
	I0723 15:09:47.050122   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key.70e02c22: {Name:mka7415f1c51402b6c25efdbe63fcd72b614b338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:47.050197   58358 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt.70e02c22 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt
	I0723 15:09:47.050295   58358 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key.70e02c22 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key
	I0723 15:09:47.050351   58358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key
	I0723 15:09:47.050366   58358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.crt with IP's: []
	I0723 15:09:47.262394   58358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.crt ...
	I0723 15:09:47.262433   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.crt: {Name:mkd403740923b760b76ba5a15ba13fa091113c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:47.262618   58358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key ...
	I0723 15:09:47.262634   58358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key: {Name:mk0ee90041e7b34c1498507eb86820d5b4bf2790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
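
certs.go above generates the per-profile client, apiserver and proxy-client key pairs, signing the apiserver cert for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.132. A compact crypto/x509 sketch of producing a CA-signed serving certificate with those IP SANs (illustrative only; the run above reuses the existing minikubeCA key pair rather than creating a new CA, and error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real run loads the existing minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving certificate with the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.132"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
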
	I0723 15:09:47.262896   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:09:47.262948   58358 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:09:47.262960   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:09:47.262988   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:09:47.263018   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:09:47.263045   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:09:47.263088   58358 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:09:47.263680   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:09:47.290282   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:09:47.314668   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:09:47.339310   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:09:47.365139   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0723 15:09:47.391105   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:09:47.415262   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:09:47.441196   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:09:47.468340   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:09:47.495181   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:09:47.522190   58358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:09:47.548614   58358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:09:47.571394   58358 ssh_runner.go:195] Run: openssl version
	I0723 15:09:47.580513   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:09:47.595732   58358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:09:47.600709   58358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:09:47.600786   58358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:09:47.607054   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:09:47.625570   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:09:47.645640   58358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:09:47.653821   58358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:09:47.653985   58358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:09:47.661423   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:09:47.679758   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:09:47.695960   58358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:09:47.706502   58358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:09:47.706567   58358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:09:47.713446   58358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
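
The openssl/ln pairs above compute each certificate's subject hash and expose it as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 in this run) so OpenSSL-based clients can locate the CAs. The same hash-and-symlink step sketched in Go, shelling out to the identical openssl invocation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink mirrors: openssl x509 -hash -noout -in <pem>; ln -fs <pem> /etc/ssl/certs/<hash>.0
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -f
	return os.Symlink(pemPath, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/18503.pem",
		"/usr/share/ca-certificates/185032.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := hashLink(pem); err != nil {
			fmt.Println(pem, err)
		}
	}
}
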
	I0723 15:09:47.731813   58358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:09:47.738004   58358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 15:09:47.738063   58358 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:09:47.738155   58358 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:09:47.738214   58358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:09:47.779690   58358 cri.go:89] found id: ""
	I0723 15:09:47.779801   58358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:09:47.790462   58358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:09:47.800485   58358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:09:47.810010   58358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:09:47.810028   58358 kubeadm.go:157] found existing configuration files:
	
	I0723 15:09:47.810067   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:09:47.819308   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:09:47.819383   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:09:47.828887   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:09:47.838652   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:09:47.838706   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:09:47.848173   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:09:47.857599   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:09:47.857665   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:09:47.867218   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:09:47.876440   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:09:47.876500   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
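
Since this is a fresh control plane, none of the four kubeconfigs in /etc/kubernetes exist yet; every grep for the control-plane endpoint exits non-zero, so the runner removes the (absent) file and moves straight on to kubeadm init. The same stale-config rule in Go form (a sketch of the logged grep/rm behaviour, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		// Keep the file only if it already points at the expected endpoint;
		// otherwise (missing or stale) remove it before kubeadm init.
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Println("keeping", conf)
			continue
		}
		os.Remove(conf)
		fmt.Println("removed (or absent):", conf)
	}
}
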
	I0723 15:09:47.886212   58358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:09:48.032186   58358 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:09:48.032270   58358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:09:48.204030   58358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:09:48.204190   58358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:09:48.204319   58358 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:09:48.400712   58358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:09:48.529817   58358 out.go:204]   - Generating certificates and keys ...
	I0723 15:09:48.529974   58358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:09:48.530055   58358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:09:48.530142   58358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 15:09:48.566502   58358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 15:09:49.005957   58358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 15:09:49.271920   58358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 15:09:49.398937   58358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 15:09:49.399089   58358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	I0723 15:09:49.538928   58358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 15:09:49.539084   58358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	I0723 15:09:49.725264   58358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 15:09:49.894677   58358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 15:09:49.999850   58358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 15:09:49.999968   58358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:09:50.105991   58358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:09:50.230544   58358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:09:50.519083   58358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:09:50.947385   58358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:09:50.965263   58358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:09:50.967167   58358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:09:50.967379   58358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:09:51.107414   58358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:09:51.110372   58358 out.go:204]   - Booting up control plane ...
	I0723 15:09:51.110562   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:09:51.116306   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:09:51.117944   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:09:51.119424   58358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:09:51.123489   58358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:10:31.120669   58358 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:10:31.126579   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:10:31.126842   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:10:36.127266   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:10:36.127536   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:10:46.128104   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:10:46.128347   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:11:06.129840   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:11:06.130028   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:11:46.129862   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:11:46.130087   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:11:46.130109   58358 kubeadm.go:310] 
	I0723 15:11:46.130159   58358 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:11:46.130198   58358 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:11:46.130206   58358 kubeadm.go:310] 
	I0723 15:11:46.130253   58358 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:11:46.130291   58358 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:11:46.130447   58358 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:11:46.130457   58358 kubeadm.go:310] 
	I0723 15:11:46.130549   58358 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:11:46.130620   58358 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:11:46.130680   58358 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:11:46.130690   58358 kubeadm.go:310] 
	I0723 15:11:46.130833   58358 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:11:46.130943   58358 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:11:46.130954   58358 kubeadm.go:310] 
	I0723 15:11:46.131108   58358 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:11:46.131229   58358 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:11:46.131330   58358 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:11:46.131439   58358 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:11:46.131462   58358 kubeadm.go:310] 
	I0723 15:11:46.131882   58358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:11:46.132013   58358 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:11:46.132106   58358 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0723 15:11:46.132249   58358 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-503350 localhost] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0723 15:11:46.132306   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:11:47.009463   58358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:11:47.023826   58358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:11:47.032929   58358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:11:47.032948   58358 kubeadm.go:157] found existing configuration files:
	
	I0723 15:11:47.033001   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:11:47.041427   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:11:47.041482   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:11:47.050285   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:11:47.058667   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:11:47.058734   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:11:47.067275   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:11:47.075592   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:11:47.075644   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:11:47.084335   58358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:11:47.092407   58358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:11:47.092473   58358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:11:47.100774   58358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:11:47.285833   58358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:13:43.317599   58358 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:13:43.317751   58358 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:13:43.319127   58358 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:13:43.319207   58358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:13:43.319304   58358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:13:43.319384   58358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:13:43.319465   58358 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:13:43.319536   58358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:13:43.321175   58358 out.go:204]   - Generating certificates and keys ...
	I0723 15:13:43.321267   58358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:13:43.321325   58358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:13:43.321416   58358 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:13:43.321494   58358 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:13:43.321574   58358 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:13:43.321624   58358 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:13:43.321676   58358 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:13:43.321727   58358 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:13:43.321792   58358 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:13:43.321855   58358 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:13:43.321888   58358 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:13:43.321949   58358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:13:43.321999   58358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:13:43.322071   58358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:13:43.322155   58358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:13:43.322229   58358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:13:43.322359   58358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:13:43.322497   58358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:13:43.322536   58358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:13:43.322612   58358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:13:43.325076   58358 out.go:204]   - Booting up control plane ...
	I0723 15:13:43.325162   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:13:43.325243   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:13:43.325312   58358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:13:43.325382   58358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:13:43.325510   58358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:13:43.325560   58358 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:13:43.325617   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:43.325783   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:43.325852   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:43.326050   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:43.326150   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:43.326319   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:43.326390   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:43.326560   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:43.326645   58358 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:43.326829   58358 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:43.326842   58358 kubeadm.go:310] 
	I0723 15:13:43.326907   58358 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:13:43.326977   58358 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:13:43.326983   58358 kubeadm.go:310] 
	I0723 15:13:43.327012   58358 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:13:43.327040   58358 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:13:43.327133   58358 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:13:43.327139   58358 kubeadm.go:310] 
	I0723 15:13:43.327236   58358 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:13:43.327281   58358 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:13:43.327319   58358 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:13:43.327330   58358 kubeadm.go:310] 
	I0723 15:13:43.327415   58358 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:13:43.327486   58358 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:13:43.327492   58358 kubeadm.go:310] 
	I0723 15:13:43.327590   58358 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:13:43.327665   58358 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:13:43.327727   58358 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:13:43.327804   58358 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:13:43.327825   58358 kubeadm.go:310] 
	I0723 15:13:43.327871   58358 kubeadm.go:394] duration metric: took 3m55.589812418s to StartCluster
	I0723 15:13:43.327908   58358 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:13:43.327958   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:13:43.366982   58358 cri.go:89] found id: ""
	I0723 15:13:43.367014   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.367025   58358 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:13:43.367033   58358 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:13:43.367101   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:13:43.398792   58358 cri.go:89] found id: ""
	I0723 15:13:43.398817   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.398824   58358 logs.go:278] No container was found matching "etcd"
	I0723 15:13:43.398830   58358 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:13:43.398880   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:13:43.430608   58358 cri.go:89] found id: ""
	I0723 15:13:43.430637   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.430649   58358 logs.go:278] No container was found matching "coredns"
	I0723 15:13:43.430656   58358 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:13:43.430727   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:13:43.461760   58358 cri.go:89] found id: ""
	I0723 15:13:43.461784   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.461792   58358 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:13:43.461797   58358 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:13:43.461850   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:13:43.493790   58358 cri.go:89] found id: ""
	I0723 15:13:43.493820   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.493828   58358 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:13:43.493836   58358 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:13:43.493889   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:13:43.526561   58358 cri.go:89] found id: ""
	I0723 15:13:43.526585   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.526593   58358 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:13:43.526598   58358 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:13:43.526647   58358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:13:43.557562   58358 cri.go:89] found id: ""
	I0723 15:13:43.557589   58358 logs.go:276] 0 containers: []
	W0723 15:13:43.557598   58358 logs.go:278] No container was found matching "kindnet"
	I0723 15:13:43.557607   58358 logs.go:123] Gathering logs for kubelet ...
	I0723 15:13:43.557621   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:13:43.610101   58358 logs.go:123] Gathering logs for dmesg ...
	I0723 15:13:43.610143   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:13:43.623145   58358 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:13:43.623176   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:13:43.730624   58358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:13:43.730651   58358 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:13:43.730666   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:13:43.825333   58358 logs.go:123] Gathering logs for container status ...
	I0723 15:13:43.825372   58358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0723 15:13:43.870643   58358 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:13:43.870698   58358 out.go:239] * 
	* 
	W0723 15:13:43.870766   58358 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:13:43.870801   58358 out.go:239] * 
	* 
	W0723 15:13:43.871994   58358 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:13:43.875970   58358 out.go:177] 
	W0723 15:13:43.877199   58358 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:13:43.877273   58358 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:13:43.877302   58358 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:13:43.879597   58358 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
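Note: the stderr block above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of acting on that suggestion for this profile, reusing the binary and flags recorded in the log (the retry flag and the in-VM inspection commands come from the suggestion and kubeadm text above; these follow-up commands are illustrative and were not run by this test):
	# Retry the v1.20.0 start with the cgroup driver named in the suggestion (hypothetical follow-up)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# Inspect the kubelet and CRI-O containers inside the VM, per the kubeadm troubleshooting hints
	out/minikube-linux-amd64 -p kubernetes-upgrade-503350 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-503350 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p kubernetes-upgrade-503350 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"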
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-503350
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-503350: (6.281251031s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-503350 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-503350 status --format={{.Host}}: exit status 7 (63.453823ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.07248876s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-503350 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.665251ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-503350] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-503350
	    minikube start -p kubernetes-upgrade-503350 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5033502 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-503350 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-503350 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.328195653s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-23 15:15:00.853955012 +0000 UTC m=+4710.059699730
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-503350 -n kubernetes-upgrade-503350
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-503350 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-503350 logs -n 25: (1.634272285s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-562147 sudo                                 | cilium-562147             | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-562147 sudo find                            | cilium-562147             | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-562147 sudo crio                            | cilium-562147             | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-562147                                      | cilium-562147             | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
	| start   | -p stopped-upgrade-193974                             | minikube                  | jenkins | v1.26.0 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:10 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-534062 ssh                               | cert-options-534062       | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-534062 -- sudo                        | cert-options-534062       | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-534062                                | cert-options-534062       | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
	| start   | -p old-k8s-version-000272                             | old-k8s-version-000272    | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-193974 stop                           | minikube                  | jenkins | v1.26.0 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:10 UTC |
	| start   | -p cert-expiration-457920                             | cert-expiration-457920    | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:11 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-193974                             | stopped-upgrade-193974    | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:11 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-457920                             | cert-expiration-457920    | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p no-preload-543029 --memory=2200                    | no-preload-543029         | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:12 UTC |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-193974                             | stopped-upgrade-193974    | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                 | embed-certs-486436        | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029            | no-preload-543029         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-543029                                  | no-preload-543029         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436           | embed-certs-486436        | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                 | embed-certs-486436        | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                          | kubernetes-upgrade-503350 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                          | kubernetes-upgrade-503350 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                          | kubernetes-upgrade-503350 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                          | kubernetes-upgrade-503350 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272       | old-k8s-version-000272    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:14:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:14:30.564323   64021 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:14:30.564420   64021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:14:30.564424   64021 out.go:304] Setting ErrFile to fd 2...
	I0723 15:14:30.564428   64021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:14:30.564603   64021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:14:30.565136   64021 out.go:298] Setting JSON to false
	I0723 15:14:30.566077   64021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7017,"bootTime":1721740654,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:14:30.566135   64021 start.go:139] virtualization: kvm guest
	I0723 15:14:30.568161   64021 out.go:177] * [kubernetes-upgrade-503350] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:14:30.569447   64021 notify.go:220] Checking for updates...
	I0723 15:14:30.569472   64021 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:14:30.570792   64021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:14:30.571931   64021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:14:30.573021   64021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:14:30.574179   64021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:14:30.575388   64021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:14:30.576998   64021 config.go:182] Loaded profile config "kubernetes-upgrade-503350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:14:30.577400   64021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:14:30.577446   64021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:14:30.593721   64021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0723 15:14:30.594111   64021 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:14:30.594875   64021 main.go:141] libmachine: Using API Version  1
	I0723 15:14:30.594903   64021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:14:30.595239   64021 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:14:30.595469   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:30.595766   64021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:14:30.596218   64021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:14:30.596264   64021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:14:30.611273   64021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0723 15:14:30.611822   64021 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:14:30.612389   64021 main.go:141] libmachine: Using API Version  1
	I0723 15:14:30.612420   64021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:14:30.612782   64021 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:14:30.612981   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:30.651247   64021 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:14:30.652570   64021 start.go:297] selected driver: kvm2
	I0723 15:14:30.652588   64021 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:14:30.652729   64021 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:14:30.653536   64021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:14:30.653619   64021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:14:30.674363   64021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:14:30.674862   64021 cni.go:84] Creating CNI manager for ""
	I0723 15:14:30.674881   64021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:14:30.674936   64021 start.go:340] cluster config:
	{Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:14:30.675072   64021 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:14:30.677158   64021 out.go:177] * Starting "kubernetes-upgrade-503350" primary control-plane node in "kubernetes-upgrade-503350" cluster
	I0723 15:14:30.678451   64021 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:14:30.678489   64021 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0723 15:14:30.678497   64021 cache.go:56] Caching tarball of preloaded images
	I0723 15:14:30.678582   64021 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:14:30.678598   64021 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0723 15:14:30.678718   64021 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/config.json ...
	I0723 15:14:30.678962   64021 start.go:360] acquireMachinesLock for kubernetes-upgrade-503350: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:14:30.679011   64021 start.go:364] duration metric: took 31.637µs to acquireMachinesLock for "kubernetes-upgrade-503350"
	I0723 15:14:30.679030   64021 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:14:30.679036   64021 fix.go:54] fixHost starting: 
	I0723 15:14:30.679466   64021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:14:30.679501   64021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:14:30.694456   64021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0723 15:14:30.694935   64021 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:14:30.695527   64021 main.go:141] libmachine: Using API Version  1
	I0723 15:14:30.695555   64021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:14:30.695895   64021 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:14:30.696106   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:30.696255   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetState
	I0723 15:14:30.698469   64021 fix.go:112] recreateIfNeeded on kubernetes-upgrade-503350: state=Running err=<nil>
	W0723 15:14:30.698491   64021 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:14:30.700494   64021 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-503350" VM ...
	I0723 15:14:30.587817   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:14:30.588089   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:14:30.588115   61145 kubeadm.go:310] 
	I0723 15:14:30.588169   61145 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:14:30.588227   61145 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:14:30.588237   61145 kubeadm.go:310] 
	I0723 15:14:30.588278   61145 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:14:30.588331   61145 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:14:30.588483   61145 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:14:30.588496   61145 kubeadm.go:310] 
	I0723 15:14:30.588644   61145 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:14:30.588705   61145 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:14:30.588750   61145 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:14:30.588761   61145 kubeadm.go:310] 
	I0723 15:14:30.588891   61145 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:14:30.589002   61145 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:14:30.589015   61145 kubeadm.go:310] 
	I0723 15:14:30.589170   61145 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:14:30.589428   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:14:30.589583   61145 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:14:30.589691   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:14:30.589702   61145 kubeadm.go:310] 
	I0723 15:14:30.590661   61145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:14:30.590791   61145 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:14:30.590890   61145 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:14:30.590961   61145 kubeadm.go:394] duration metric: took 3m55.255086134s to StartCluster
	I0723 15:14:30.591024   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:14:30.591087   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:14:30.640630   61145 cri.go:89] found id: ""
	I0723 15:14:30.640658   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.640669   61145 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:14:30.640676   61145 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:14:30.640732   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:14:30.678920   61145 cri.go:89] found id: ""
	I0723 15:14:30.678946   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.678954   61145 logs.go:278] No container was found matching "etcd"
	I0723 15:14:30.678962   61145 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:14:30.679023   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:14:30.717609   61145 cri.go:89] found id: ""
	I0723 15:14:30.717633   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.717642   61145 logs.go:278] No container was found matching "coredns"
	I0723 15:14:30.717649   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:14:30.717700   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:14:30.775955   61145 cri.go:89] found id: ""
	I0723 15:14:30.775986   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.775995   61145 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:14:30.776003   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:14:30.776069   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:14:30.810116   61145 cri.go:89] found id: ""
	I0723 15:14:30.810144   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.810155   61145 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:14:30.810163   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:14:30.810224   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:14:30.844174   61145 cri.go:89] found id: ""
	I0723 15:14:30.844203   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.844214   61145 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:14:30.844222   61145 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:14:30.844284   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:14:30.880638   61145 cri.go:89] found id: ""
	I0723 15:14:30.880671   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.880681   61145 logs.go:278] No container was found matching "kindnet"
	I0723 15:14:30.880693   61145 logs.go:123] Gathering logs for dmesg ...
	I0723 15:14:30.880709   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:14:30.895113   61145 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:14:30.895140   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:14:31.031289   61145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:14:31.031311   61145 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:14:31.031325   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:14:31.131316   61145 logs.go:123] Gathering logs for container status ...
	I0723 15:14:31.131352   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:14:31.181186   61145 logs.go:123] Gathering logs for kubelet ...
	I0723 15:14:31.181215   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 15:14:31.230540   61145 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:14:31.230594   61145 out.go:239] * 
	W0723 15:14:31.230658   61145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:14:31.230686   61145 out.go:239] * 
	W0723 15:14:31.231504   61145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:14:31.234028   61145 out.go:177] 
	W0723 15:14:31.235075   61145 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:14:31.235116   61145 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:14:31.235141   61145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:14:31.236539   61145 out.go:177] 
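A minimal sketch of the diagnosis sequence that the kubeadm/minikube output above itself suggests for the K8S_KUBELET_NOT_RUNNING failure, assuming shell access to the failing node (the profile name and container ID below are placeholders):

	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# per the suggestion above, retry with an explicit kubelet cgroup driver:
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd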
	I0723 15:14:30.701847   64021 machine.go:94] provisionDockerMachine start ...
	I0723 15:14:30.701869   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:30.702134   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:30.705420   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:30.705831   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:30.705868   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:30.706021   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:30.706196   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:30.706432   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:30.706624   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:30.706813   64021 main.go:141] libmachine: Using SSH client type: native
	I0723 15:14:30.707065   64021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:14:30.707082   64021 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:14:30.890482   64021 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-503350
	
	I0723 15:14:30.890512   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:14:30.890792   64021 buildroot.go:166] provisioning hostname "kubernetes-upgrade-503350"
	I0723 15:14:30.890841   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:14:30.891059   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:30.894485   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:30.894866   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:30.894908   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:30.895099   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:30.895324   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:30.895503   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:30.895639   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:30.895795   64021 main.go:141] libmachine: Using SSH client type: native
	I0723 15:14:30.896004   64021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:14:30.896023   64021 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-503350 && echo "kubernetes-upgrade-503350" | sudo tee /etc/hostname
	I0723 15:14:31.031990   64021 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-503350
	
	I0723 15:14:31.032019   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:31.034958   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.035296   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:31.035328   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.035547   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:31.035745   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:31.035940   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:31.036074   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:31.036251   64021 main.go:141] libmachine: Using SSH client type: native
	I0723 15:14:31.036464   64021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:14:31.036493   64021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-503350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-503350/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-503350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:14:31.143543   64021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:14:31.143573   64021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:14:31.143608   64021 buildroot.go:174] setting up certificates
	I0723 15:14:31.143620   64021 provision.go:84] configureAuth start
	I0723 15:14:31.143636   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetMachineName
	I0723 15:14:31.143906   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:14:31.147166   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.147543   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:31.147571   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.147774   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:31.150474   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.150973   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:31.151002   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.151144   64021 provision.go:143] copyHostCerts
	I0723 15:14:31.151204   64021 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:14:31.151233   64021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:14:31.151324   64021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:14:31.151507   64021 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:14:31.151521   64021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:14:31.151566   64021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:14:31.151647   64021 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:14:31.151657   64021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:14:31.151691   64021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:14:31.151755   64021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-503350 san=[127.0.0.1 192.168.61.132 kubernetes-upgrade-503350 localhost minikube]
	I0723 15:14:31.240405   64021 provision.go:177] copyRemoteCerts
	I0723 15:14:31.240458   64021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:14:31.240480   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:31.243614   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.244001   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:31.244033   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.244398   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:31.244592   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:31.244774   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:31.244932   64021 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:14:31.333776   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:14:31.363553   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0723 15:14:31.399667   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:14:31.427569   64021 provision.go:87] duration metric: took 283.926245ms to configureAuth
	I0723 15:14:31.427601   64021 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:14:31.427812   64021 config.go:182] Loaded profile config "kubernetes-upgrade-503350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:14:31.427896   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:31.430988   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.431364   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:31.431391   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:31.431576   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:31.431776   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:31.431949   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:31.432117   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:31.432312   64021 main.go:141] libmachine: Using SSH client type: native
	I0723 15:14:31.432518   64021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:14:31.432537   64021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:14:37.396968   64021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:14:37.396995   64021 machine.go:97] duration metric: took 6.695134601s to provisionDockerMachine
	I0723 15:14:37.397006   64021 start.go:293] postStartSetup for "kubernetes-upgrade-503350" (driver="kvm2")
	I0723 15:14:37.397017   64021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:14:37.397033   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:37.397462   64021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:14:37.397492   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:37.401084   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.401572   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:37.401621   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.401779   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:37.401993   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:37.402178   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:37.402320   64021 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:14:37.488388   64021 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:14:37.492302   64021 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:14:37.492330   64021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:14:37.492402   64021 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:14:37.492500   64021 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:14:37.492637   64021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:14:37.501827   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:14:37.524021   64021 start.go:296] duration metric: took 127.000878ms for postStartSetup
	I0723 15:14:37.524059   64021 fix.go:56] duration metric: took 6.845023644s for fixHost
	I0723 15:14:37.524078   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:37.526710   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.527074   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:37.527118   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.527307   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:37.527516   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:37.527673   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:37.527819   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:37.528037   64021 main.go:141] libmachine: Using SSH client type: native
	I0723 15:14:37.528232   64021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I0723 15:14:37.528248   64021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:14:37.626767   64021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721747677.616903222
	
	I0723 15:14:37.626801   64021 fix.go:216] guest clock: 1721747677.616903222
	I0723 15:14:37.626828   64021 fix.go:229] Guest: 2024-07-23 15:14:37.616903222 +0000 UTC Remote: 2024-07-23 15:14:37.524063246 +0000 UTC m=+6.994645069 (delta=92.839976ms)
	I0723 15:14:37.626855   64021 fix.go:200] guest clock delta is within tolerance: 92.839976ms
	I0723 15:14:37.626862   64021 start.go:83] releasing machines lock for "kubernetes-upgrade-503350", held for 6.947838739s
	I0723 15:14:37.626887   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:37.627153   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:14:37.630090   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.630438   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:37.630472   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.630690   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:37.631234   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:37.631508   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .DriverName
	I0723 15:14:37.631601   64021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:14:37.631652   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:37.631777   64021 ssh_runner.go:195] Run: cat /version.json
	I0723 15:14:37.631800   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHHostname
	I0723 15:14:37.634531   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.634595   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.634990   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:37.635017   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.635045   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:37.635073   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:37.635115   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:37.635302   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHPort
	I0723 15:14:37.635327   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:37.635477   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:37.635535   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHKeyPath
	I0723 15:14:37.635645   64021 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:14:37.635720   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetSSHUsername
	I0723 15:14:37.635839   64021 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/kubernetes-upgrade-503350/id_rsa Username:docker}
	I0723 15:14:37.711258   64021 ssh_runner.go:195] Run: systemctl --version
	I0723 15:14:37.747551   64021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:14:37.899290   64021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:14:37.904759   64021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:14:37.904837   64021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:14:37.913849   64021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 15:14:37.913872   64021 start.go:495] detecting cgroup driver to use...
	I0723 15:14:37.913931   64021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:14:37.929446   64021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:14:37.943276   64021 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:14:37.943348   64021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:14:37.956743   64021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:14:37.969658   64021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:14:38.101295   64021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:14:38.231744   64021 docker.go:233] disabling docker service ...
	I0723 15:14:38.231814   64021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:14:38.253660   64021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:14:38.266846   64021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:14:38.398818   64021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:14:38.533241   64021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:14:38.547544   64021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:14:38.566689   64021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:14:38.566757   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.576913   64021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:14:38.576978   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.587070   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.596978   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.606816   64021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:14:38.616457   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.626188   64021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.636847   64021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:14:38.646394   64021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:14:38.655334   64021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:14:38.663888   64021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:14:38.793053   64021 ssh_runner.go:195] Run: sudo systemctl restart crio
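A quick way to confirm the effective cri-o settings after the sed edits and restart above (a sketch, assuming the drop-in path used in the log):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits logged above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)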
	I0723 15:14:39.078949   64021 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:14:39.079014   64021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:14:39.083849   64021 start.go:563] Will wait 60s for crictl version
	I0723 15:14:39.083903   64021 ssh_runner.go:195] Run: which crictl
	I0723 15:14:39.087696   64021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:14:39.124904   64021 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:14:39.124978   64021 ssh_runner.go:195] Run: crio --version
	I0723 15:14:39.151142   64021 ssh_runner.go:195] Run: crio --version
	I0723 15:14:39.179513   64021 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:14:39.180565   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) Calling .GetIP
	I0723 15:14:39.183436   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:39.183908   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:94:16", ip: ""} in network mk-kubernetes-upgrade-503350: {Iface:virbr3 ExpiryTime:2024-07-23 16:14:00 +0000 UTC Type:0 Mac:52:54:00:ae:94:16 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:kubernetes-upgrade-503350 Clientid:01:52:54:00:ae:94:16}
	I0723 15:14:39.183938   64021 main.go:141] libmachine: (kubernetes-upgrade-503350) DBG | domain kubernetes-upgrade-503350 has defined IP address 192.168.61.132 and MAC address 52:54:00:ae:94:16 in network mk-kubernetes-upgrade-503350
	I0723 15:14:39.184140   64021 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:14:39.188305   64021 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:14:39.188431   64021 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:14:39.188504   64021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:14:39.227194   64021 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:14:39.227217   64021 crio.go:433] Images already preloaded, skipping extraction
	I0723 15:14:39.227273   64021 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:14:39.262947   64021 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:14:39.262975   64021 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:14:39.262984   64021 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:14:39.263093   64021 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-503350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
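The [Unit]/[Service] fragment above is the kubelet drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; to see what the kubelet will actually run with on the node (sketch, assuming shell access):

	systemctl cat kubelet                                       # unit file plus all drop-ins, including the ExecStart override above
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf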
	I0723 15:14:39.263155   64021 ssh_runner.go:195] Run: crio config
	I0723 15:14:39.306780   64021 cni.go:84] Creating CNI manager for ""
	I0723 15:14:39.306803   64021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:14:39.306815   64021 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:14:39.306833   64021 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-503350 NodeName:kubernetes-upgrade-503350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:14:39.306952   64021 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-503350"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
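Note that cgroupDriver: cgroupfs in the KubeletConfiguration above matches the cgroup_manager = "cgroupfs" set for cri-o earlier; a mismatch between the two is the kind of problem the earlier minikube hint (--extra-config=kubelet.cgroup-driver=systemd) points at. A quick consistency check on the node (sketch, file paths as logged above):

	grep cgroupDriver /var/lib/kubelet/config.yaml
	grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf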
	
	I0723 15:14:39.307007   64021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:14:39.316612   64021 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:14:39.316679   64021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:14:39.325788   64021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0723 15:14:39.340779   64021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:14:39.355991   64021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0723 15:14:39.370941   64021 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I0723 15:14:39.374544   64021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:14:39.517239   64021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:14:39.532419   64021 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350 for IP: 192.168.61.132
	I0723 15:14:39.532455   64021 certs.go:194] generating shared ca certs ...
	I0723 15:14:39.532472   64021 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:14:39.532635   64021 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:14:39.532677   64021 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:14:39.532688   64021 certs.go:256] generating profile certs ...
	I0723 15:14:39.532771   64021 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/client.key
	I0723 15:14:39.532823   64021 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key.70e02c22
	I0723 15:14:39.532868   64021 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key
	I0723 15:14:39.532987   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:14:39.533017   64021 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:14:39.533027   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:14:39.533059   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:14:39.533092   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:14:39.533116   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:14:39.533156   64021 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:14:39.533841   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:14:39.556262   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:14:39.579801   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:14:39.601032   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:14:39.623116   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0723 15:14:39.645932   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:14:39.671104   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:14:39.692536   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kubernetes-upgrade-503350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:14:39.714140   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:14:39.736262   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:14:39.822841   64021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:14:39.892432   64021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:14:39.983685   64021 ssh_runner.go:195] Run: openssl version
	I0723 15:14:39.990879   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:14:40.080187   64021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:14:40.138459   64021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:14:40.138537   64021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:14:40.215266   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:14:40.262640   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:14:40.343922   64021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:14:40.393382   64021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:14:40.393446   64021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:14:40.410539   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:14:40.494008   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:14:40.569796   64021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:14:40.617931   64021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:14:40.617995   64021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:14:40.643855   64021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
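The link names above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject-hash names: OpenSSL-based clients look up a trust anchor in /etc/ssl/certs by the hash of its subject, so each CA gets a <hash>.0 symlink. Reproducing one by hand (sketch, values as logged above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem, itself linked to /usr/share/ca-certificates/minikubeCA.pem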
	I0723 15:14:40.706332   64021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:14:40.722803   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:14:40.754119   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:14:40.766013   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:14:40.796388   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:14:40.815134   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:14:40.830456   64021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
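The -checkend 86400 probes above ask whether each cluster certificate will still be valid 24 hours from now: openssl x509 -checkend N exits 0 if the certificate does not expire within the next N seconds and non-zero if it does, which presumably lets minikube regenerate soon-to-expire certificates before reusing them. For example (sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires within 24h"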
	I0723 15:14:40.845706   64021 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-503350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-503350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:14:40.845779   64021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:14:40.845849   64021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:14:41.068732   64021 cri.go:89] found id: "0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4"
	I0723 15:14:41.068759   64021 cri.go:89] found id: "e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d"
	I0723 15:14:41.068764   64021 cri.go:89] found id: "b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584"
	I0723 15:14:41.068769   64021 cri.go:89] found id: "ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728"
	I0723 15:14:41.068773   64021 cri.go:89] found id: "48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0"
	I0723 15:14:41.068778   64021 cri.go:89] found id: "8376691001bf76041274e9e95451cddc329ca38508067833d560744d714c6743"
	I0723 15:14:41.068782   64021 cri.go:89] found id: "2d577bde18a6cd9881b89af30ea85380749de25ec9be9ad0e2654878f09728ca"
	I0723 15:14:41.068785   64021 cri.go:89] found id: "8206fc445458ada5870708d73ea43d34b66075cefa73ad7479d57ecde77bcc5b"
	I0723 15:14:41.068789   64021 cri.go:89] found id: "fe5b588f7b89ee9b74dcceea448d2c1afb6e55200c861bd23550eb781d6cd7ab"
	I0723 15:14:41.068797   64021 cri.go:89] found id: "1f8d03c712f441018b600498ca9dea7192b6a6191999ffae78595c3383f8e901"
	I0723 15:14:41.068801   64021 cri.go:89] found id: "daa7241c19684a989b79150a32a2cf780c584bdbe6aeebe0c998f12b5fb7eeff"
	I0723 15:14:41.068805   64021 cri.go:89] found id: "b5a57a5d54f891052252c67a62d0cd9f74e25f9b7f4e2c5f382450d4de83ec78"
	I0723 15:14:41.068809   64021 cri.go:89] found id: "e7233eafd2a5c8c2c35b44b23313e9ad74dfd8d1e9a0024ed7327a2f036cab0e"
	I0723 15:14:41.068814   64021 cri.go:89] found id: ""
	I0723 15:14:41.068861   64021 ssh_runner.go:195] Run: sudo runc list -f json
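
The `found id:` entries above come from the two steps just logged: minikube asks CRI-O for every kube-system container via `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, then cross-checks with `sudo runc list -f json`. Below is a minimal sketch of that container-ID listing, not minikube source; it assumes only that `sudo` and `crictl` are available on the node, and the helper name `listKubeSystemContainers` is hypothetical.

    // Illustrative sketch: collect kube-system container IDs the same way the
    // log above does, by shelling out to crictl with a namespace label filter.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listKubeSystemContainers() ([]string, error) {
    	// Equivalent of: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps: %w", err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line) // crictl --quiet prints one container ID per line
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }

With the paused/exited containers included (`-a`), the count matches the thirteen IDs reported by cri.go above; the CRI-O debug log that follows shows the corresponding ListContainers responses on the server side.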
	
	
	==> CRI-O <==
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.569393340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2850a741-b370-457b-852f-21626d6e5b7d name=/runtime.v1.RuntimeService/Version
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.570317260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c6ffa33-f203-47d4-980c-2f0bf9a412e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.570693459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721747701570669896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c6ffa33-f203-47d4-980c-2f0bf9a412e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.571167839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=416fd516-dae5-4acc-8d61-606f4e4c9ad8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.571239386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=416fd516-dae5-4acc-8d61-606f4e4c9ad8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.571651941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2beea616edb78fa9397e6030a7b4eca08ede96619b3ccab0c0ee702f21df04de,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698681962648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dae1e18ba10de320baa135161112c70f9e6005ac7d2022bf5ca92426aad2e19,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721747698701289490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4419f97c9daa83d14097b2086587fdeefd9e300ca158dc1320f866e39b52b791,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721747698667050677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9349278090d804a17b012bf52ed2a64f1eef0def4db6d199f82bb4e6de4be7a1,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698661763476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-19
75ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c671158bc654bab1d70dfa563e16491fe3dd83128613383fe325757beeaed81,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721747694835197522,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b8702018d53119f1da5820a97adbd202584eb08ec7d3fa44ab8a4a120a50a7,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721747694856640
996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5aa302ca1bd4cce12f9f2e9f4351655636976a66847909157da7c71d4d15,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172
1747694815475003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165c93a64466e4fb2d528be3a40a518409fe6b7cb7f00fbab10591b46ad6e91a,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172174769482352291
2,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681026459252,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-1975ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681048498009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624
610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721747680311607669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721747680350961038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721747680316872480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attemp
t:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721747680267184613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721747680235068313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721747680073273411,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=416fd516-dae5-4acc-8d61-606f4e4c9ad8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.591615046Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b30f95db-f766-4fec-95ae-03c7c0e0e96e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.591877343Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-hf85b,Uid:cef0286a-76cb-4e3c-a61c-af9d3a456323,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747680266845490,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:14:29.313633362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2fec828c-020e-4003-91ea-ddc1443c1372,Namespace:kub
e-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679979037615,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"h
ostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T15:14:30.249663686Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-w8prp,Uid:2567166a-b025-4c9d-86b0-1975ea76a7e0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679970278672,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-1975ea76a7e0,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:14:29.258283703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:839db7d2520385623e67d945a3e8afd2689568484524624610fd715741233c7c,Metadata:&PodSandboxMetadata{Name:kube-proxy-wqjvx,Uid:da349c9c-9f3b-405c-9f3c-55bdf51a3c00,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679963664831,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:14:29.082690068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-503350,Uid:d2bcff8f7f11c51a0ede549315258c55,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679913576140,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549
315258c55,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.132:8443,kubernetes.io/config.hash: d2bcff8f7f11c51a0ede549315258c55,kubernetes.io/config.seen: 2024-07-23T15:14:15.595623405Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-503350,Uid:68c3b160486dd048171b661c3a0936e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679912685784,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.132:2379,kubernetes.io/config.hash: 68c3b160486dd048171b661c3a0936e1,kubernetes.io/config.s
een: 2024-07-23T15:14:15.651647179Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-503350,Uid:59a3725899190b8759ff8cb4b1cd473e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679829266487,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59a3725899190b8759ff8cb4b1cd473e,kubernetes.io/config.seen: 2024-07-23T15:14:15.595626802Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-503350,Uid:1
e15d2b8f3b50c7ca58005a07335b6a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721747679786680200,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1e15d2b8f3b50c7ca58005a07335b6a4,kubernetes.io/config.seen: 2024-07-23T15:14:15.595627845Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b30f95db-f766-4fec-95ae-03c7c0e0e96e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.592398532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e013270-e6cc-4d06-903e-26bd2e7bfd6f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.592466710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e013270-e6cc-4d06-903e-26bd2e7bfd6f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.592812111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2beea616edb78fa9397e6030a7b4eca08ede96619b3ccab0c0ee702f21df04de,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698681962648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dae1e18ba10de320baa135161112c70f9e6005ac7d2022bf5ca92426aad2e19,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721747698701289490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4419f97c9daa83d14097b2086587fdeefd9e300ca158dc1320f866e39b52b791,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721747698667050677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9349278090d804a17b012bf52ed2a64f1eef0def4db6d199f82bb4e6de4be7a1,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698661763476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-19
75ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c671158bc654bab1d70dfa563e16491fe3dd83128613383fe325757beeaed81,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721747694835197522,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b8702018d53119f1da5820a97adbd202584eb08ec7d3fa44ab8a4a120a50a7,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721747694856640
996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5aa302ca1bd4cce12f9f2e9f4351655636976a66847909157da7c71d4d15,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172
1747694815475003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165c93a64466e4fb2d528be3a40a518409fe6b7cb7f00fbab10591b46ad6e91a,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172174769482352291
2,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681026459252,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-1975ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681048498009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624
610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721747680311607669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721747680350961038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721747680316872480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attemp
t:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721747680267184613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721747680235068313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721747680073273411,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e013270-e6cc-4d06-903e-26bd2e7bfd6f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.616221126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=019c4b59-5e53-4fce-917b-d43a404676b3 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.616304995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=019c4b59-5e53-4fce-917b-d43a404676b3 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.618038541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c16a5ca4-e361-4c37-a59a-904a7680383e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.618403438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721747701618382770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c16a5ca4-e361-4c37-a59a-904a7680383e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.618922745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbbe7b12-d184-44b2-8b73-fa60c827dc87 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.618992351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbbe7b12-d184-44b2-8b73-fa60c827dc87 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.619296515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2beea616edb78fa9397e6030a7b4eca08ede96619b3ccab0c0ee702f21df04de,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698681962648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dae1e18ba10de320baa135161112c70f9e6005ac7d2022bf5ca92426aad2e19,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721747698701289490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4419f97c9daa83d14097b2086587fdeefd9e300ca158dc1320f866e39b52b791,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721747698667050677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9349278090d804a17b012bf52ed2a64f1eef0def4db6d199f82bb4e6de4be7a1,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698661763476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-19
75ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c671158bc654bab1d70dfa563e16491fe3dd83128613383fe325757beeaed81,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721747694835197522,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b8702018d53119f1da5820a97adbd202584eb08ec7d3fa44ab8a4a120a50a7,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721747694856640
996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5aa302ca1bd4cce12f9f2e9f4351655636976a66847909157da7c71d4d15,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172
1747694815475003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165c93a64466e4fb2d528be3a40a518409fe6b7cb7f00fbab10591b46ad6e91a,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172174769482352291
2,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681026459252,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-1975ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681048498009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624
610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721747680311607669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721747680350961038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721747680316872480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attemp
t:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721747680267184613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721747680235068313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721747680073273411,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbbe7b12-d184-44b2-8b73-fa60c827dc87 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.651169105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0df14658-0296-41d8-856e-907b5cec2040 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.651242748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0df14658-0296-41d8-856e-907b5cec2040 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.653567646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdcc91f6-5601-4713-8ff9-ec5380bd76bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.654009675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721747701653977212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdcc91f6-5601-4713-8ff9-ec5380bd76bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.654660328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00766e54-5ed8-4229-bda8-8dcc9e3a5778 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.654757909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00766e54-5ed8-4229-bda8-8dcc9e3a5778 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:15:01 kubernetes-upgrade-503350 crio[2269]: time="2024-07-23 15:15:01.655081480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2beea616edb78fa9397e6030a7b4eca08ede96619b3ccab0c0ee702f21df04de,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698681962648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dae1e18ba10de320baa135161112c70f9e6005ac7d2022bf5ca92426aad2e19,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721747698701289490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4419f97c9daa83d14097b2086587fdeefd9e300ca158dc1320f866e39b52b791,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721747698667050677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9349278090d804a17b012bf52ed2a64f1eef0def4db6d199f82bb4e6de4be7a1,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721747698661763476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-19
75ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c671158bc654bab1d70dfa563e16491fe3dd83128613383fe325757beeaed81,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721747694835197522,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b8702018d53119f1da5820a97adbd202584eb08ec7d3fa44ab8a4a120a50a7,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721747694856640
996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5aa302ca1bd4cce12f9f2e9f4351655636976a66847909157da7c71d4d15,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:172
1747694815475003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:165c93a64466e4fb2d528be3a40a518409fe6b7cb7f00fbab10591b46ad6e91a,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172174769482352291
2,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc,PodSandboxId:160161b12e6247be39ac460c756b3fda09c33b09d9c71e6c88bd85704defe06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681026459252,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w8prp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2567166a-b025-4c9d-86b0-1975ea76a7e0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce,PodSandboxId:0b521b0beda5bd3e8b99b69490c1b33bb8665cfecc9ef9e21508d29f98408bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721747681048498009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hf85b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cef0286a-76cb-4e3c-a61c-af9d3a456323,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7,PodSandboxId:839db7d2520385623e67d945a3e8afd2689568484524624
610fd715741233c7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721747680311607669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqjvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da349c9c-9f3b-405c-9f3c-55bdf51a3c00,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4,PodSandboxId:fcf8612efcb9d1dfec484623908ac0bac7fee957328e19cbc8217492dafe0f1a,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721747680350961038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c3b160486dd048171b661c3a0936e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d,PodSandboxId:d3252982dd43ae358fe156c6df56a9dcaeacc33d49cb39f4aa071156f9751c1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721747680316872480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2bcff8f7f11c51a0ede549315258c55,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584,PodSandboxId:01bedcc237837774f4ede39efbf87a872efe42d61499f770d09a66b520c5e474,Metadata:&ContainerMetadata{Name:storage-provisioner,Attemp
t:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721747680267184613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fec828c-020e-4003-91ea-ddc1443c1372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728,PodSandboxId:4c7b16ed12ff673b447829db78296383a9607a76a1b160b2f13525edd42a0b4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721747680235068313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a3725899190b8759ff8cb4b1cd473e,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0,PodSandboxId:8fe60f9d5b186084aa961281c5030918d2e073e2c28742729356e05f04f816e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721747680073273411,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-503350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e15d2b8f3b50c7ca58005a07335b6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00766e54-5ed8-4229-bda8-8dcc9e3a5778 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4dae1e18ba10d       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   839db7d252038       kube-proxy-wqjvx
	2beea616edb78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   0b521b0beda5b       coredns-5cfdc65f69-hf85b
	4419f97c9daa8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   01bedcc237837       storage-provisioner
	9349278090d80       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   160161b12e624       coredns-5cfdc65f69-w8prp
	08b8702018d53       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   6 seconds ago       Running             kube-controller-manager   2                   4c7b16ed12ff6       kube-controller-manager-kubernetes-upgrade-503350
	4c671158bc654       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   6 seconds ago       Running             kube-scheduler            2                   8fe60f9d5b186       kube-scheduler-kubernetes-upgrade-503350
	165c93a64466e       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   6 seconds ago       Running             etcd                      2                   fcf8612efcb9d       etcd-kubernetes-upgrade-503350
	730a5aa302ca1       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   6 seconds ago       Running             kube-apiserver            2                   d3252982dd43a       kube-apiserver-kubernetes-upgrade-503350
	ca9eccd0ed418       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Exited              coredns                   1                   0b521b0beda5b       coredns-5cfdc65f69-hf85b
	c9fbaa78df21c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Exited              coredns                   1                   160161b12e624       coredns-5cfdc65f69-w8prp
	0060c2f124d5e       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   21 seconds ago      Exited              etcd                      1                   fcf8612efcb9d       etcd-kubernetes-upgrade-503350
	e6196856cbcfb       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   21 seconds ago      Exited              kube-apiserver            1                   d3252982dd43a       kube-apiserver-kubernetes-upgrade-503350
	e91c0cf1db019       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   21 seconds ago      Exited              kube-proxy                1                   839db7d252038       kube-proxy-wqjvx
	b4886a1017973       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Exited              storage-provisioner       1                   01bedcc237837       storage-provisioner
	ec459dc1563ee       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   21 seconds ago      Exited              kube-controller-manager   1                   4c7b16ed12ff6       kube-controller-manager-kubernetes-upgrade-503350
	48daab54500d7       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   21 seconds ago      Exited              kube-scheduler            1                   8fe60f9d5b186       kube-scheduler-kubernetes-upgrade-503350
	
	
	==> coredns [2beea616edb78fa9397e6030a7b4eca08ede96619b3ccab0c0ee702f21df04de] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9349278090d804a17b012bf52ed2a64f1eef0def4db6d199f82bb4e6de4be7a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc] <==
	
	
	==> coredns [ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-503350
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-503350
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-503350
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:14:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:14:58 +0000   Tue, 23 Jul 2024 15:14:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:14:58 +0000   Tue, 23 Jul 2024 15:14:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:14:58 +0000   Tue, 23 Jul 2024 15:14:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:14:58 +0000   Tue, 23 Jul 2024 15:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.132
	  Hostname:    kubernetes-upgrade-503350
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6839eed77fc54fe6a07848ef6aee1a18
	  System UUID:                6839eed7-7fc5-4fe6-a078-48ef6aee1a18
	  Boot ID:                    4ece7e8d-321e-43eb-9b79-0106f0e77bc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-hf85b                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     33s
	  kube-system                 coredns-5cfdc65f69-w8prp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     33s
	  kube-system                 etcd-kubernetes-upgrade-503350                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         35s
	  kube-system                 kube-apiserver-kubernetes-upgrade-503350             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-503350    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-wqjvx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-kubernetes-upgrade-503350             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  44s (x8 over 47s)  kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     44s (x7 over 47s)  kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    44s (x8 over 47s)  kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           34s                node-controller  Node kubernetes-upgrade-503350 event: Registered Node kubernetes-upgrade-503350 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x6 over 8s)    kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x5 over 8s)    kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x5 over 8s)    kubelet          Node kubernetes-upgrade-503350 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-503350 event: Registered Node kubernetes-upgrade-503350 in Controller
	
	
	==> dmesg <==
	[Jul23 15:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.475079] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.054471] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051570] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.189443] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.126258] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.256492] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +3.735176] systemd-fstab-generator[725]: Ignoring "noauto" option for root device
	[  +1.714414] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.064152] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.679479] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.114108] kauditd_printk_skb: 69 callbacks suppressed
	[  +8.744238] systemd-fstab-generator[2188]: Ignoring "noauto" option for root device
	[  +0.077022] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.055471] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.165409] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +0.136304] systemd-fstab-generator[2226]: Ignoring "noauto" option for root device
	[  +0.252387] systemd-fstab-generator[2254]: Ignoring "noauto" option for root device
	[  +0.718582] systemd-fstab-generator[2407]: Ignoring "noauto" option for root device
	[ +12.632700] kauditd_printk_skb: 231 callbacks suppressed
	[  +2.066572] systemd-fstab-generator[3485]: Ignoring "noauto" option for root device
	[  +4.632249] kauditd_printk_skb: 45 callbacks suppressed
	[  +1.093365] systemd-fstab-generator[4021]: Ignoring "noauto" option for root device
	
	
	==> etcd [0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4] <==
	{"level":"info","ts":"2024-07-23T15:14:40.887275Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-23T15:14:40.917434Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"39bbf28f78c30ebd","local-member-id":"2ce4a813e4d55e4e","commit-index":385}
	{"level":"info","ts":"2024-07-23T15:14:40.917555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-23T15:14:40.917608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e became follower at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:40.917632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2ce4a813e4d55e4e [peers: [], term: 2, commit: 385, applied: 0, lastindex: 385, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-23T15:14:40.93305Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-23T15:14:40.99527Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":376}
	{"level":"info","ts":"2024-07-23T15:14:41.001032Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-23T15:14:41.038083Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2ce4a813e4d55e4e","timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:14:41.038342Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2ce4a813e4d55e4e"}
	{"level":"info","ts":"2024-07-23T15:14:41.03838Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"2ce4a813e4d55e4e","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-23T15:14:41.038989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-23T15:14:41.043334Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-23T15:14:41.043502Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:14:41.043555Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:14:41.043564Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:14:41.043813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e switched to configuration voters=(3234895235755892302)"}
	{"level":"info","ts":"2024-07-23T15:14:41.043876Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"39bbf28f78c30ebd","local-member-id":"2ce4a813e4d55e4e","added-peer-id":"2ce4a813e4d55e4e","added-peer-peer-urls":["https://192.168.61.132:2380"]}
	{"level":"info","ts":"2024-07-23T15:14:41.043996Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"39bbf28f78c30ebd","local-member-id":"2ce4a813e4d55e4e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:41.044077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:41.045176Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:14:41.045377Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2ce4a813e4d55e4e","initial-advertise-peer-urls":["https://192.168.61.132:2380"],"listen-peer-urls":["https://192.168.61.132:2380"],"advertise-client-urls":["https://192.168.61.132:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.132:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:14:41.045413Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:14:41.045468Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.132:2380"}
	{"level":"info","ts":"2024-07-23T15:14:41.045487Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.132:2380"}
	
	
	==> etcd [165c93a64466e4fb2d528be3a40a518409fe6b7cb7f00fbab10591b46ad6e91a] <==
	{"level":"info","ts":"2024-07-23T15:14:55.140219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e switched to configuration voters=(3234895235755892302)"}
	{"level":"info","ts":"2024-07-23T15:14:55.140283Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"39bbf28f78c30ebd","local-member-id":"2ce4a813e4d55e4e","added-peer-id":"2ce4a813e4d55e4e","added-peer-peer-urls":["https://192.168.61.132:2380"]}
	{"level":"info","ts":"2024-07-23T15:14:55.140388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"39bbf28f78c30ebd","local-member-id":"2ce4a813e4d55e4e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:55.140428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:55.149184Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:14:55.149676Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.132:2380"}
	{"level":"info","ts":"2024-07-23T15:14:55.149806Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.132:2380"}
	{"level":"info","ts":"2024-07-23T15:14:55.151108Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2ce4a813e4d55e4e","initial-advertise-peer-urls":["https://192.168.61.132:2380"],"listen-peer-urls":["https://192.168.61.132:2380"],"advertise-client-urls":["https://192.168.61.132:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.132:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:14:55.151236Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:14:56.716193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:56.716333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:56.716373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e received MsgPreVoteResp from 2ce4a813e4d55e4e at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:56.716404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:56.716428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e received MsgVoteResp from 2ce4a813e4d55e4e at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:56.716455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ce4a813e4d55e4e became leader at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:56.716482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2ce4a813e4d55e4e elected leader 2ce4a813e4d55e4e at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:56.721274Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2ce4a813e4d55e4e","local-member-attributes":"{Name:kubernetes-upgrade-503350 ClientURLs:[https://192.168.61.132:2379]}","request-path":"/0/members/2ce4a813e4d55e4e/attributes","cluster-id":"39bbf28f78c30ebd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:14:56.721366Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:56.721577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:56.721615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:56.721641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:56.722411Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-23T15:14:56.722604Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-23T15:14:56.723433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:14:56.723485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.132:2379"}
	
	
	==> kernel <==
	 15:15:02 up 1 min,  0 users,  load average: 1.37, 0.36, 0.12
	Linux kubernetes-upgrade-503350 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [730a5aa302ca1bd4cce12f9f2e9f4351655636976a66847909157da7c71d4d15] <==
	I0723 15:14:57.985641       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 15:14:57.985902       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 15:14:57.986051       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 15:14:57.986326       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0723 15:14:57.986384       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0723 15:14:57.990996       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 15:14:57.992890       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 15:14:57.992918       1 policy_source.go:224] refreshing policies
	I0723 15:14:57.996917       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0723 15:14:58.004359       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 15:14:58.005136       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 15:14:58.007931       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 15:14:58.008037       1 aggregator.go:171] initial CRD sync complete...
	I0723 15:14:58.008111       1 autoregister_controller.go:144] Starting autoregister controller
	I0723 15:14:58.008136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 15:14:58.008157       1 cache.go:39] Caches are synced for autoregister controller
	E0723 15:14:58.023043       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 15:14:58.908635       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0723 15:14:59.592306       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 15:14:59.614484       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 15:14:59.659086       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 15:14:59.744047       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 15:14:59.751897       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 15:15:01.698299       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 15:15:01.726213       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d] <==
	I0723 15:14:40.935009       1 options.go:228] external host was not specified, using 192.168.61.132
	I0723 15:14:40.960205       1 server.go:142] Version: v1.31.0-beta.0
	I0723 15:14:40.960259       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:42.007527       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0723 15:14:42.029260       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:42.029427       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0723 15:14:42.050154       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 15:14:42.057973       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0723 15:14:42.058004       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0723 15:14:42.058210       1 instance.go:231] Using reconciler: lease
	W0723 15:14:42.061098       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:43.029798       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:43.029813       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:43.061760       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:44.573295       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:44.625173       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:44.659029       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:46.986217       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:47.346127       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:47.409002       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:50.764852       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:51.464177       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:14:51.750031       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [08b8702018d53119f1da5820a97adbd202584eb08ec7d3fa44ab8a4a120a50a7] <==
	I0723 15:15:01.713801       1 shared_informer.go:320] Caches are synced for service account
	I0723 15:15:01.720075       1 shared_informer.go:320] Caches are synced for namespace
	I0723 15:15:01.764088       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0723 15:15:01.764149       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0723 15:15:01.765375       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0723 15:15:01.765386       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0723 15:15:01.783995       1 shared_informer.go:320] Caches are synced for persistent volume
	I0723 15:15:01.807293       1 shared_informer.go:320] Caches are synced for attach detach
	I0723 15:15:01.812796       1 shared_informer.go:320] Caches are synced for cronjob
	I0723 15:15:01.813990       1 shared_informer.go:320] Caches are synced for taint
	I0723 15:15:01.814141       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0723 15:15:01.814220       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-503350"
	I0723 15:15:01.814287       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0723 15:15:01.864409       1 shared_informer.go:320] Caches are synced for daemon sets
	I0723 15:15:01.915778       1 shared_informer.go:320] Caches are synced for stateful set
	I0723 15:15:01.972303       1 shared_informer.go:320] Caches are synced for deployment
	I0723 15:15:01.978781       1 shared_informer.go:320] Caches are synced for disruption
	I0723 15:15:01.998818       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0723 15:15:02.035508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="36.516296ms"
	I0723 15:15:02.037166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="73.857µs"
	I0723 15:15:02.057503       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0723 15:15:02.358822       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:15:02.365220       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:15:02.365259       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 15:15:02.373523       1 shared_informer.go:320] Caches are synced for resource quota
	
	
	==> kube-controller-manager [ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728] <==
	I0723 15:14:41.974190       1 serving.go:386] Generated self-signed cert in-memory
	I0723 15:14:42.367228       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0723 15:14:42.367259       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:42.368545       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 15:14:42.368690       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0723 15:14:42.368845       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 15:14:42.369013       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [4dae1e18ba10de320baa135161112c70f9e6005ac7d2022bf5ca92426aad2e19] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0723 15:14:59.014826       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0723 15:14:59.031062       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.132"]
	E0723 15:14:59.031132       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0723 15:14:59.086671       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0723 15:14:59.087641       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:14:59.087742       1 server_linux.go:170] "Using iptables Proxier"
	I0723 15:14:59.091667       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0723 15:14:59.091976       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0723 15:14:59.092002       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:59.094075       1 config.go:197] "Starting service config controller"
	I0723 15:14:59.094097       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:14:59.094110       1 config.go:104] "Starting endpoint slice config controller"
	I0723 15:14:59.094113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:14:59.094901       1 config.go:326] "Starting node config controller"
	I0723 15:14:59.094961       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:14:59.194232       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:14:59.194271       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:14:59.195326       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7] <==
	I0723 15:14:42.264500       1 server_linux.go:67] "Using iptables proxy"
	E0723 15:14:42.282276       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0723 15:14:42.311169       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	
	
	==> kube-scheduler [48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0] <==
	I0723 15:14:41.724580       1 serving.go:386] Generated self-signed cert in-memory
	W0723 15:14:52.857211       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.61.132:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.61.132:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.132:33608->192.168.61.132:8443: read: connection reset by peer
	W0723 15:14:52.857293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:14:52.857306       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:14:52.864104       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0723 15:14:52.864127       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0723 15:14:52.864143       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0723 15:14:52.866081       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0723 15:14:52.866144       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0723 15:14:52.866217       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4c671158bc654bab1d70dfa563e16491fe3dd83128613383fe325757beeaed81] <==
	I0723 15:14:55.801194       1 serving.go:386] Generated self-signed cert in-memory
	W0723 15:14:57.935218       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 15:14:57.935289       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:14:57.935298       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:14:57.935304       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:14:58.020754       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0723 15:14:58.020776       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:58.027611       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:14:58.027769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:14:58.027786       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:14:58.027809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0723 15:14:58.128545       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.549105    3492 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2bcff8f7f11c51a0ede549315258c55-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-503350\" (UID: \"d2bcff8f7f11c51a0ede549315258c55\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-503350"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.549119    3492 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2bcff8f7f11c51a0ede549315258c55-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-503350\" (UID: \"d2bcff8f7f11c51a0ede549315258c55\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-503350"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.652536    3492 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-503350"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: E0723 15:14:54.653607    3492 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.132:8443: connect: connection refused" node="kubernetes-upgrade-503350"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.789964    3492 scope.go:117] "RemoveContainer" containerID="ec459dc1563ee3435026508a2a67eb5e48057fa2dca7f17914322656a736c728"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.791447    3492 scope.go:117] "RemoveContainer" containerID="48daab54500d7c604d94c9694d2d4f6ee8cc675ead660b7d5ddef2031af31ea0"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.793021    3492 scope.go:117] "RemoveContainer" containerID="0060c2f124d5e13a8a22419874a50bfdb13bc6689dbbfa1324e8afe10bf0c8a4"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:54.793885    3492 scope.go:117] "RemoveContainer" containerID="e6196856cbcfb2aeb87990aab6d3466badfd4914424fba19d928ed82aeee1f5d"
	Jul 23 15:14:54 kubernetes-upgrade-503350 kubelet[3492]: E0723 15:14:54.947484    3492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-503350?timeout=10s\": dial tcp 192.168.61.132:8443: connect: connection refused" interval="800ms"
	Jul 23 15:14:55 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:55.055787    3492 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-503350"
	Jul 23 15:14:55 kubernetes-upgrade-503350 kubelet[3492]: E0723 15:14:55.056649    3492 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.132:8443: connect: connection refused" node="kubernetes-upgrade-503350"
	Jul 23 15:14:55 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:55.858659    3492 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-503350"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.093470    3492 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-503350"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.093586    3492 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-503350"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.093613    3492 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.094378    3492 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.336884    3492 apiserver.go:52] "Watching apiserver"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.389585    3492 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da349c9c-9f3b-405c-9f3c-55bdf51a3c00-lib-modules\") pod \"kube-proxy-wqjvx\" (UID: \"da349c9c-9f3b-405c-9f3c-55bdf51a3c00\") " pod="kube-system/kube-proxy-wqjvx"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.390214    3492 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da349c9c-9f3b-405c-9f3c-55bdf51a3c00-xtables-lock\") pod \"kube-proxy-wqjvx\" (UID: \"da349c9c-9f3b-405c-9f3c-55bdf51a3c00\") " pod="kube-system/kube-proxy-wqjvx"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.446632    3492 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.492888    3492 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2fec828c-020e-4003-91ea-ddc1443c1372-tmp\") pod \"storage-provisioner\" (UID: \"2fec828c-020e-4003-91ea-ddc1443c1372\") " pod="kube-system/storage-provisioner"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.646886    3492 scope.go:117] "RemoveContainer" containerID="c9fbaa78df21cc53c302b48182c91d03ec03e265a157327e86dc261aa721a9fc"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.654124    3492 scope.go:117] "RemoveContainer" containerID="b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.655418    3492 scope.go:117] "RemoveContainer" containerID="e91c0cf1db019213d4a01e86692ec1bf436a3fbd8f6c8ed04957023f51f32df7"
	Jul 23 15:14:58 kubernetes-upgrade-503350 kubelet[3492]: I0723 15:14:58.657228    3492 scope.go:117] "RemoveContainer" containerID="ca9eccd0ed4189b3adcb2f49e48f352b0d426f69769b14a3744fc30d125820ce"
	
	
	==> storage-provisioner [4419f97c9daa83d14097b2086587fdeefd9e300ca158dc1320f866e39b52b791] <==
	I0723 15:14:58.855629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:14:58.889483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:14:58.889634       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b4886a1017973a1e3dd07fd5c17b8437a845147b040b6492e8ec0fc8a42e9584] <==
	I0723 15:14:40.990601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 15:14:51.056537       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:01.173270   64389 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-11303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
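(Editor's note on the "bufio.Scanner: token too long" error in the stderr above: Go's bufio.Scanner rejects any line longer than its default 64 KiB token limit, which is what logs.go hits when a single line in lastStart.txt is oversized. The sketch below only illustrates that limit and how a larger buffer avoids it; the file name and program structure are hypothetical and are not minikube's actual code.)

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; stands in for a log file containing very long lines.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default max token size is bufio.MaxScanTokenSize (64 KiB); raise it to
	// 1 MiB so a single oversized line no longer triggers "token too long".
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		// Without the Buffer call above, this reports bufio.ErrTooLong.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}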
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-503350 -n kubernetes-upgrade-503350
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-503350 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-503350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-503350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-503350: (1.099987864s)
--- FAIL: TestKubernetesUpgrade (355.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (291.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0723 15:09:49.699488   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m51.059238799s)

                                                
                                                
-- stdout --
	* [old-k8s-version-000272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-000272" primary control-plane node in "old-k8s-version-000272" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:09:40.227450   61145 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:09:40.227998   61145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:40.228054   61145 out.go:304] Setting ErrFile to fd 2...
	I0723 15:09:40.228072   61145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:40.228556   61145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:09:40.229557   61145 out.go:298] Setting JSON to false
	I0723 15:09:40.231063   61145 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6726,"bootTime":1721740654,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:09:40.231146   61145 start.go:139] virtualization: kvm guest
	I0723 15:09:40.233467   61145 out.go:177] * [old-k8s-version-000272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:09:40.234931   61145 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:09:40.234967   61145 notify.go:220] Checking for updates...
	I0723 15:09:40.237447   61145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:09:40.238611   61145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:09:40.239726   61145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:09:40.240944   61145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:09:40.242094   61145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:09:40.243761   61145 config.go:182] Loaded profile config "cert-expiration-457920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:40.243879   61145 config.go:182] Loaded profile config "kubernetes-upgrade-503350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:09:40.243978   61145 config.go:182] Loaded profile config "stopped-upgrade-193974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0723 15:09:40.244074   61145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:09:40.283441   61145 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 15:09:40.284626   61145 start.go:297] selected driver: kvm2
	I0723 15:09:40.284642   61145 start.go:901] validating driver "kvm2" against <nil>
	I0723 15:09:40.284656   61145 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:09:40.285385   61145 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:09:40.285485   61145 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:09:40.301561   61145 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:09:40.301633   61145 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 15:09:40.301906   61145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:09:40.301978   61145 cni.go:84] Creating CNI manager for ""
	I0723 15:09:40.301994   61145 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:09:40.302002   61145 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 15:09:40.302072   61145 start.go:340] cluster config:
	{Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:09:40.302185   61145 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:09:40.303916   61145 out.go:177] * Starting "old-k8s-version-000272" primary control-plane node in "old-k8s-version-000272" cluster
	I0723 15:09:40.305127   61145 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:09:40.305161   61145 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0723 15:09:40.305174   61145 cache.go:56] Caching tarball of preloaded images
	I0723 15:09:40.305253   61145 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:09:40.305267   61145 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0723 15:09:40.305401   61145 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:09:40.305426   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json: {Name:mkdd082cebfc33c5b2db5f82f8f995c1c6c725d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:09:40.305589   61145 start.go:360] acquireMachinesLock for old-k8s-version-000272: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:10:02.874773   61145 start.go:364] duration metric: took 22.569153581s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:10:02.874855   61145 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:10:02.874987   61145 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 15:10:02.877170   61145 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0723 15:10:02.877367   61145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:10:02.877408   61145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:10:02.893882   61145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42059
	I0723 15:10:02.894313   61145 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:10:02.894868   61145 main.go:141] libmachine: Using API Version  1
	I0723 15:10:02.894889   61145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:10:02.895268   61145 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:10:02.895442   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:10:02.895588   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:02.895756   61145 start.go:159] libmachine.API.Create for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:10:02.895815   61145 client.go:168] LocalClient.Create starting
	I0723 15:10:02.895849   61145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 15:10:02.895896   61145 main.go:141] libmachine: Decoding PEM data...
	I0723 15:10:02.895919   61145 main.go:141] libmachine: Parsing certificate...
	I0723 15:10:02.895983   61145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 15:10:02.896007   61145 main.go:141] libmachine: Decoding PEM data...
	I0723 15:10:02.896024   61145 main.go:141] libmachine: Parsing certificate...
	I0723 15:10:02.896059   61145 main.go:141] libmachine: Running pre-create checks...
	I0723 15:10:02.896077   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .PreCreateCheck
	I0723 15:10:02.896425   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:10:02.896860   61145 main.go:141] libmachine: Creating machine...
	I0723 15:10:02.896879   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .Create
	I0723 15:10:02.897010   61145 main.go:141] libmachine: (old-k8s-version-000272) Creating KVM machine...
	I0723 15:10:02.898222   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found existing default KVM network
	I0723 15:10:02.899667   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:02.899523   61404 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:08:ac:0c} reservation:<nil>}
	I0723 15:10:02.900611   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:02.900535   61404 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2970}
	I0723 15:10:02.900640   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | created network xml: 
	I0723 15:10:02.900651   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | <network>
	I0723 15:10:02.900662   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   <name>mk-old-k8s-version-000272</name>
	I0723 15:10:02.900675   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   <dns enable='no'/>
	I0723 15:10:02.900685   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   
	I0723 15:10:02.900699   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0723 15:10:02.900713   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |     <dhcp>
	I0723 15:10:02.900731   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0723 15:10:02.900743   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |     </dhcp>
	I0723 15:10:02.900754   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   </ip>
	I0723 15:10:02.900764   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG |   
	I0723 15:10:02.900776   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | </network>
	I0723 15:10:02.900796   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | 
	I0723 15:10:02.906064   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | trying to create private KVM network mk-old-k8s-version-000272 192.168.50.0/24...
	I0723 15:10:02.974959   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | private KVM network mk-old-k8s-version-000272 192.168.50.0/24 created
	I0723 15:10:02.975006   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272 ...
	I0723 15:10:02.975031   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:02.974914   61404 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:10:02.975054   61145 main.go:141] libmachine: (old-k8s-version-000272) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 15:10:02.975075   61145 main.go:141] libmachine: (old-k8s-version-000272) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 15:10:03.218647   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:03.218468   61404 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa...
	I0723 15:10:03.260954   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:03.260830   61404 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/old-k8s-version-000272.rawdisk...
	I0723 15:10:03.260985   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Writing magic tar header
	I0723 15:10:03.260998   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Writing SSH key tar header
	I0723 15:10:03.261012   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:03.260958   61404 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272 ...
	I0723 15:10:03.261135   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272
	I0723 15:10:03.261167   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 15:10:03.261180   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272 (perms=drwx------)
	I0723 15:10:03.261204   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 15:10:03.261218   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 15:10:03.261231   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 15:10:03.261241   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 15:10:03.261254   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:10:03.261265   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 15:10:03.261278   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 15:10:03.261318   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home/jenkins
	I0723 15:10:03.261363   61145 main.go:141] libmachine: (old-k8s-version-000272) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 15:10:03.261378   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Checking permissions on dir: /home
	I0723 15:10:03.261386   61145 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:10:03.261399   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Skipping /home - not owner
	I0723 15:10:03.262362   61145 main.go:141] libmachine: (old-k8s-version-000272) define libvirt domain using xml: 
	I0723 15:10:03.262401   61145 main.go:141] libmachine: (old-k8s-version-000272) <domain type='kvm'>
	I0723 15:10:03.262439   61145 main.go:141] libmachine: (old-k8s-version-000272)   <name>old-k8s-version-000272</name>
	I0723 15:10:03.262463   61145 main.go:141] libmachine: (old-k8s-version-000272)   <memory unit='MiB'>2200</memory>
	I0723 15:10:03.262472   61145 main.go:141] libmachine: (old-k8s-version-000272)   <vcpu>2</vcpu>
	I0723 15:10:03.262487   61145 main.go:141] libmachine: (old-k8s-version-000272)   <features>
	I0723 15:10:03.262499   61145 main.go:141] libmachine: (old-k8s-version-000272)     <acpi/>
	I0723 15:10:03.262507   61145 main.go:141] libmachine: (old-k8s-version-000272)     <apic/>
	I0723 15:10:03.262528   61145 main.go:141] libmachine: (old-k8s-version-000272)     <pae/>
	I0723 15:10:03.262537   61145 main.go:141] libmachine: (old-k8s-version-000272)     
	I0723 15:10:03.262545   61145 main.go:141] libmachine: (old-k8s-version-000272)   </features>
	I0723 15:10:03.262556   61145 main.go:141] libmachine: (old-k8s-version-000272)   <cpu mode='host-passthrough'>
	I0723 15:10:03.262566   61145 main.go:141] libmachine: (old-k8s-version-000272)   
	I0723 15:10:03.262574   61145 main.go:141] libmachine: (old-k8s-version-000272)   </cpu>
	I0723 15:10:03.262588   61145 main.go:141] libmachine: (old-k8s-version-000272)   <os>
	I0723 15:10:03.262607   61145 main.go:141] libmachine: (old-k8s-version-000272)     <type>hvm</type>
	I0723 15:10:03.262617   61145 main.go:141] libmachine: (old-k8s-version-000272)     <boot dev='cdrom'/>
	I0723 15:10:03.262629   61145 main.go:141] libmachine: (old-k8s-version-000272)     <boot dev='hd'/>
	I0723 15:10:03.262637   61145 main.go:141] libmachine: (old-k8s-version-000272)     <bootmenu enable='no'/>
	I0723 15:10:03.262645   61145 main.go:141] libmachine: (old-k8s-version-000272)   </os>
	I0723 15:10:03.262650   61145 main.go:141] libmachine: (old-k8s-version-000272)   <devices>
	I0723 15:10:03.262659   61145 main.go:141] libmachine: (old-k8s-version-000272)     <disk type='file' device='cdrom'>
	I0723 15:10:03.262679   61145 main.go:141] libmachine: (old-k8s-version-000272)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/boot2docker.iso'/>
	I0723 15:10:03.262691   61145 main.go:141] libmachine: (old-k8s-version-000272)       <target dev='hdc' bus='scsi'/>
	I0723 15:10:03.262700   61145 main.go:141] libmachine: (old-k8s-version-000272)       <readonly/>
	I0723 15:10:03.262709   61145 main.go:141] libmachine: (old-k8s-version-000272)     </disk>
	I0723 15:10:03.262719   61145 main.go:141] libmachine: (old-k8s-version-000272)     <disk type='file' device='disk'>
	I0723 15:10:03.262734   61145 main.go:141] libmachine: (old-k8s-version-000272)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 15:10:03.262763   61145 main.go:141] libmachine: (old-k8s-version-000272)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/old-k8s-version-000272.rawdisk'/>
	I0723 15:10:03.262778   61145 main.go:141] libmachine: (old-k8s-version-000272)       <target dev='hda' bus='virtio'/>
	I0723 15:10:03.262785   61145 main.go:141] libmachine: (old-k8s-version-000272)     </disk>
	I0723 15:10:03.262797   61145 main.go:141] libmachine: (old-k8s-version-000272)     <interface type='network'>
	I0723 15:10:03.262809   61145 main.go:141] libmachine: (old-k8s-version-000272)       <source network='mk-old-k8s-version-000272'/>
	I0723 15:10:03.262817   61145 main.go:141] libmachine: (old-k8s-version-000272)       <model type='virtio'/>
	I0723 15:10:03.262828   61145 main.go:141] libmachine: (old-k8s-version-000272)     </interface>
	I0723 15:10:03.262837   61145 main.go:141] libmachine: (old-k8s-version-000272)     <interface type='network'>
	I0723 15:10:03.262849   61145 main.go:141] libmachine: (old-k8s-version-000272)       <source network='default'/>
	I0723 15:10:03.262861   61145 main.go:141] libmachine: (old-k8s-version-000272)       <model type='virtio'/>
	I0723 15:10:03.262870   61145 main.go:141] libmachine: (old-k8s-version-000272)     </interface>
	I0723 15:10:03.262879   61145 main.go:141] libmachine: (old-k8s-version-000272)     <serial type='pty'>
	I0723 15:10:03.262889   61145 main.go:141] libmachine: (old-k8s-version-000272)       <target port='0'/>
	I0723 15:10:03.262899   61145 main.go:141] libmachine: (old-k8s-version-000272)     </serial>
	I0723 15:10:03.262906   61145 main.go:141] libmachine: (old-k8s-version-000272)     <console type='pty'>
	I0723 15:10:03.262916   61145 main.go:141] libmachine: (old-k8s-version-000272)       <target type='serial' port='0'/>
	I0723 15:10:03.262927   61145 main.go:141] libmachine: (old-k8s-version-000272)     </console>
	I0723 15:10:03.262938   61145 main.go:141] libmachine: (old-k8s-version-000272)     <rng model='virtio'>
	I0723 15:10:03.262950   61145 main.go:141] libmachine: (old-k8s-version-000272)       <backend model='random'>/dev/random</backend>
	I0723 15:10:03.262959   61145 main.go:141] libmachine: (old-k8s-version-000272)     </rng>
	I0723 15:10:03.262966   61145 main.go:141] libmachine: (old-k8s-version-000272)     
	I0723 15:10:03.262975   61145 main.go:141] libmachine: (old-k8s-version-000272)     
	I0723 15:10:03.262983   61145 main.go:141] libmachine: (old-k8s-version-000272)   </devices>
	I0723 15:10:03.262999   61145 main.go:141] libmachine: (old-k8s-version-000272) </domain>
	I0723 15:10:03.263014   61145 main.go:141] libmachine: (old-k8s-version-000272) 
	I0723 15:10:03.269951   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:e8:a4:22 in network default
	I0723 15:10:03.270663   61145 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:10:03.270689   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:03.271417   61145 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:10:03.271684   61145 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:10:03.272170   61145 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:10:03.272783   61145 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:10:04.497745   61145 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:10:04.498732   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:04.499197   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:04.499231   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:04.499181   61404 retry.go:31] will retry after 256.08014ms: waiting for machine to come up
	I0723 15:10:04.756533   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:04.756979   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:04.757005   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:04.756933   61404 retry.go:31] will retry after 259.117958ms: waiting for machine to come up
	I0723 15:10:05.017188   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:05.017824   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:05.017853   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:05.017782   61404 retry.go:31] will retry after 452.717626ms: waiting for machine to come up
	I0723 15:10:05.472420   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:05.472864   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:05.472892   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:05.472825   61404 retry.go:31] will retry after 460.109308ms: waiting for machine to come up
	I0723 15:10:05.934510   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:05.935028   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:05.935061   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:05.934997   61404 retry.go:31] will retry after 638.200817ms: waiting for machine to come up
	I0723 15:10:06.574714   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:06.575175   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:06.575206   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:06.575110   61404 retry.go:31] will retry after 857.51075ms: waiting for machine to come up
	I0723 15:10:07.434309   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:07.434955   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:07.434989   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:07.434891   61404 retry.go:31] will retry after 797.566909ms: waiting for machine to come up
	I0723 15:10:08.234282   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:08.234890   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:08.234918   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:08.234836   61404 retry.go:31] will retry after 1.192104413s: waiting for machine to come up
	I0723 15:10:09.429296   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:09.429897   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:09.429925   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:09.429862   61404 retry.go:31] will retry after 1.377691711s: waiting for machine to come up
	I0723 15:10:10.808880   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:10.809409   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:10.809438   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:10.809355   61404 retry.go:31] will retry after 1.663742638s: waiting for machine to come up
	I0723 15:10:12.475323   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:12.475941   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:12.475972   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:12.475894   61404 retry.go:31] will retry after 2.728936234s: waiting for machine to come up
	I0723 15:10:15.207906   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:15.208408   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:15.208436   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:15.208375   61404 retry.go:31] will retry after 3.35743868s: waiting for machine to come up
	I0723 15:10:18.568024   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:18.568583   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:18.568623   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:18.568523   61404 retry.go:31] will retry after 2.740618439s: waiting for machine to come up
	I0723 15:10:21.310777   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:21.311458   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:10:21.311480   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:10:21.311438   61404 retry.go:31] will retry after 4.881514397s: waiting for machine to come up
	I0723 15:10:26.195396   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.195936   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.195963   61145 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:10:26.195971   61145 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:10:26.196363   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272
	I0723 15:10:26.270081   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:10:26.270115   61145 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:10:26.270134   61145 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:10:26.273446   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.273954   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.273991   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.274154   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:10:26.274178   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:10:26.274226   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:10:26.274244   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:10:26.274261   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:10:26.402609   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:10:26.402870   61145 main.go:141] libmachine: (old-k8s-version-000272) KVM machine creation complete!
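The long run of "will retry after ..." lines above is the driver's backoff loop: libvirt only assigns the VM an address once its DHCP client comes up, so machine creation keeps polling the network's leases until the MAC 52:54:00:90:92:e1 resolves to an IP, then moves on to WaitForSSH. As a hand-run illustration only (not part of the test run), the same wait can be reproduced with virsh against the network and MAC from this log:

	# poll the libvirt network's DHCP leases until the VM's MAC gets an address
	while ! virsh net-dhcp-leases mk-old-k8s-version-000272 | grep -q 52:54:00:90:92:e1; do sleep 2; done
	# once the lease exists, print it (here it resolves to 192.168.50.51)
	virsh net-dhcp-leases mk-old-k8s-version-000272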
	I0723 15:10:26.403223   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:10:26.403790   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:26.404026   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:26.404208   61145 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 15:10:26.404225   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:10:26.405808   61145 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 15:10:26.405824   61145 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 15:10:26.405831   61145 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 15:10:26.405840   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:26.410114   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.410726   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.410755   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.410958   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:26.411117   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.411273   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.411422   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:26.411589   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:26.411857   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:26.411874   61145 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 15:10:26.525605   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:10:26.525628   61145 main.go:141] libmachine: Detecting the provisioner...
	I0723 15:10:26.525636   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:26.528597   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.529028   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.529064   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.529198   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:26.529436   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.529634   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.529827   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:26.530013   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:26.530178   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:26.530193   61145 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 15:10:26.651517   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 15:10:26.651621   61145 main.go:141] libmachine: found compatible host: buildroot
	I0723 15:10:26.651643   61145 main.go:141] libmachine: Provisioning with buildroot...
	I0723 15:10:26.651658   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:10:26.651922   61145 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:10:26.651956   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:10:26.652166   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:26.655077   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.655500   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.655525   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.655634   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:26.655833   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.656020   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.656183   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:26.656346   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:26.656557   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:26.656572   61145 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:10:26.791952   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:10:26.791987   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:26.795425   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.795834   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.795861   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.796085   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:26.796283   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.796443   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:26.796605   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:26.796827   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:26.797074   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:26.797103   61145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:10:26.928991   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:10:26.929030   61145 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:10:26.929087   61145 buildroot.go:174] setting up certificates
	I0723 15:10:26.929101   61145 provision.go:84] configureAuth start
	I0723 15:10:26.929121   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:10:26.929437   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:10:26.933166   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.933581   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.933627   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.933823   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:26.936982   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.937481   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:26.937566   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:26.937805   61145 provision.go:143] copyHostCerts
	I0723 15:10:26.937866   61145 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:10:26.937881   61145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:10:26.937953   61145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:10:26.938100   61145 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:10:26.938110   61145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:10:26.938145   61145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:10:26.938253   61145 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:10:26.938265   61145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:10:26.938296   61145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:10:26.938401   61145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:10:27.049008   61145 provision.go:177] copyRemoteCerts
	I0723 15:10:27.049075   61145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:10:27.049098   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.052233   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.052659   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.052689   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.052874   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.053077   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.053268   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.053405   61145 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:10:27.147760   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:10:27.172799   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:10:27.196252   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:10:27.224873   61145 provision.go:87] duration metric: took 295.754971ms to configureAuth
	I0723 15:10:27.224905   61145 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:10:27.225103   61145 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:10:27.225201   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.228098   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.228533   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.228563   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.228736   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.228972   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.229167   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.229338   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.229504   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:27.229669   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:27.229683   61145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:10:27.503248   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:10:27.503301   61145 main.go:141] libmachine: Checking connection to Docker...
	I0723 15:10:27.503311   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetURL
	I0723 15:10:27.504599   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using libvirt version 6000000
	I0723 15:10:27.506786   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.507139   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.507165   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.507324   61145 main.go:141] libmachine: Docker is up and running!
	I0723 15:10:27.507339   61145 main.go:141] libmachine: Reticulating splines...
	I0723 15:10:27.507345   61145 client.go:171] duration metric: took 24.6115202s to LocalClient.Create
	I0723 15:10:27.507367   61145 start.go:167] duration metric: took 24.611614283s to libmachine.API.Create "old-k8s-version-000272"
	I0723 15:10:27.507377   61145 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:10:27.507387   61145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:10:27.507401   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:27.507614   61145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:10:27.507640   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.509822   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.510110   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.510157   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.510240   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.510433   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.510593   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.510717   61145 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:10:27.596741   61145 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:10:27.600947   61145 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:10:27.600971   61145 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:10:27.601029   61145 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:10:27.601108   61145 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:10:27.601197   61145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:10:27.610724   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:10:27.633680   61145 start.go:296] duration metric: took 126.289247ms for postStartSetup
	I0723 15:10:27.633739   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:10:27.634331   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:10:27.637329   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.637748   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.637791   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.637990   61145 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:10:27.638180   61145 start.go:128] duration metric: took 24.763180023s to createHost
	I0723 15:10:27.638204   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.640376   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.640679   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.640705   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.640852   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.641041   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.641186   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.641320   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.641497   61145 main.go:141] libmachine: Using SSH client type: native
	I0723 15:10:27.641779   61145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:10:27.641797   61145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:10:27.759242   61145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721747427.719404535
	
	I0723 15:10:27.759267   61145 fix.go:216] guest clock: 1721747427.719404535
	I0723 15:10:27.759278   61145 fix.go:229] Guest: 2024-07-23 15:10:27.719404535 +0000 UTC Remote: 2024-07-23 15:10:27.638192314 +0000 UTC m=+47.452928765 (delta=81.212221ms)
	I0723 15:10:27.759308   61145 fix.go:200] guest clock delta is within tolerance: 81.212221ms
	I0723 15:10:27.759314   61145 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 24.884500576s
	I0723 15:10:27.759334   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:27.759588   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:10:27.762659   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.763182   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.763221   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.763347   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:27.763888   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:27.764103   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:10:27.764257   61145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:10:27.764323   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.764367   61145 ssh_runner.go:195] Run: cat /version.json
	I0723 15:10:27.764405   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:10:27.767981   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.768308   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.768337   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.768356   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.768509   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.768705   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.768917   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:27.768956   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:27.769025   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:10:27.769061   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.769186   61145 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:10:27.769672   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:10:27.769822   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:10:27.769964   61145 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:10:27.858950   61145 ssh_runner.go:195] Run: systemctl --version
	I0723 15:10:27.899027   61145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:10:28.062646   61145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:10:28.069186   61145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:10:28.069257   61145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:10:28.084346   61145 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:10:28.084377   61145 start.go:495] detecting cgroup driver to use...
	I0723 15:10:28.084451   61145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:10:28.099803   61145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:10:28.113201   61145 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:10:28.113253   61145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:10:28.126629   61145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:10:28.140106   61145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:10:28.259587   61145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:10:28.397192   61145 docker.go:233] disabling docker service ...
	I0723 15:10:28.397266   61145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:10:28.411734   61145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:10:28.424498   61145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:10:28.565776   61145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:10:28.676941   61145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:10:28.689914   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:10:28.709235   61145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:10:28.709301   61145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:10:28.719250   61145 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:10:28.719316   61145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:10:28.729925   61145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:10:28.739817   61145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:10:28.750020   61145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:10:28.760050   61145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:10:28.769140   61145 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:10:28.769204   61145 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:10:28.780710   61145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:10:28.789727   61145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:10:28.907710   61145 ssh_runner.go:195] Run: sudo systemctl restart crio
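Condensed from the exact commands above (same files and values, repeated here only as a readable summary), the runtime preparation before the CRI-O restart amounts to:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# make bridged pod traffic visible to iptables and enable forwarding
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio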
	I0723 15:10:29.069526   61145 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:10:29.069594   61145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:10:29.075194   61145 start.go:563] Will wait 60s for crictl version
	I0723 15:10:29.075256   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:29.079759   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:10:29.117624   61145 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:10:29.117713   61145 ssh_runner.go:195] Run: crio --version
	I0723 15:10:29.146900   61145 ssh_runner.go:195] Run: crio --version
	I0723 15:10:29.181963   61145 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:10:29.183268   61145 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:10:29.186302   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:29.186876   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:10:17 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:10:29.186907   61145 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:10:29.187185   61145 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:10:29.192072   61145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:10:29.204717   61145 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:10:29.204827   61145 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:10:29.204886   61145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:10:29.238866   61145 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:10:29.238965   61145 ssh_runner.go:195] Run: which lz4
	I0723 15:10:29.243219   61145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:10:29.247311   61145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:10:29.247351   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:10:30.656817   61145 crio.go:462] duration metric: took 1.413630732s to copy over tarball
	I0723 15:10:30.656906   61145 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:10:33.138312   61145 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481376729s)
	I0723 15:10:33.138348   61145 crio.go:469] duration metric: took 2.481502405s to extract the tarball
	I0723 15:10:33.138356   61145 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:10:33.179771   61145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:10:33.224053   61145 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:10:33.224078   61145 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:10:33.224136   61145 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:10:33.224151   61145 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:10:33.224168   61145 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:10:33.224198   61145 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:10:33.224221   61145 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:10:33.224237   61145 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:10:33.224202   61145 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:10:33.224393   61145 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:10:33.225600   61145 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:10:33.225606   61145 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:10:33.225676   61145 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:10:33.225683   61145 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:10:33.225672   61145 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:10:33.225752   61145 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:10:33.225764   61145 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:10:33.225979   61145 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:10:33.484728   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:10:33.494363   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:10:33.506515   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:10:33.515497   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:10:33.526221   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:10:33.530787   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:10:33.531804   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:10:33.560732   61145 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:10:33.560778   61145 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:10:33.560812   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.597336   61145 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:10:33.597378   61145 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:10:33.597426   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.656196   61145 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:10:33.656238   61145 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:10:33.656290   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.656303   61145 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:10:33.656351   61145 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:10:33.656408   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.671454   61145 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:10:33.671487   61145 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:10:33.671491   61145 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:10:33.671499   61145 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:10:33.671517   61145 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:10:33.671517   61145 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:10:33.671545   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.671550   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.671551   61145 ssh_runner.go:195] Run: which crictl
	I0723 15:10:33.671568   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:10:33.671607   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:10:33.671630   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:10:33.671654   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:10:33.749652   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:10:33.749723   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:10:33.749735   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:10:33.749787   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:10:33.749801   61145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:10:33.755999   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:10:33.756075   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:10:33.826569   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:10:33.826617   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:10:33.826658   61145 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:10:34.201757   61145 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:10:34.342065   61145 cache_images.go:92] duration metric: took 1.117972044s to LoadCachedImages
	W0723 15:10:34.342156   61145 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
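The warning above is the end of a fallback chain rather than a fatal error: the preload tarball was extracted, but the v1.20.0 images still were not visible in the CRI-O store, and the per-image cache under .minikube/cache/images was empty as well, so the images end up being pulled from their registries later in the run. A quick manual way to see what the runtime actually holds (illustrative only; assumes crictl and jq are available on the guest):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep 'kube-\|etcd\|coredns'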
	I0723 15:10:34.342170   61145 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:10:34.342302   61145 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:10:34.342403   61145 ssh_runner.go:195] Run: crio config
	I0723 15:10:34.395196   61145 cni.go:84] Creating CNI manager for ""
	I0723 15:10:34.395218   61145 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:10:34.395230   61145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:10:34.395247   61145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:10:34.395369   61145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:10:34.395427   61145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:10:34.405241   61145 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:10:34.405319   61145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:10:34.415011   61145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:10:34.430524   61145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:10:34.445887   61145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
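(The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml later in this run. As a sketch only, assuming the profile name and binary path recorded in this log, the same config could be exercised by hand without applying changes via kubeadm's --dry-run flag:

	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh \
	  "sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run"
)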
	I0723 15:10:34.461971   61145 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:10:34.465470   61145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:10:34.476915   61145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:10:34.609342   61145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:10:34.625056   61145 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:10:34.625087   61145 certs.go:194] generating shared ca certs ...
	I0723 15:10:34.625104   61145 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.625294   61145 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:10:34.625375   61145 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:10:34.625392   61145 certs.go:256] generating profile certs ...
	I0723 15:10:34.625459   61145 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:10:34.625481   61145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt with IP's: []
	I0723 15:10:34.699958   61145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt ...
	I0723 15:10:34.699994   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: {Name:mka8c750715c67ebef618d5588712c67401d26a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.700189   61145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key ...
	I0723 15:10:34.700206   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key: {Name:mk682025ce1b19888be5c12fcc2fd30240d2ff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.700333   61145 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:10:34.700358   61145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt.2c7d9ab3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.51]
	I0723 15:10:34.812154   61145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt.2c7d9ab3 ...
	I0723 15:10:34.812182   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt.2c7d9ab3: {Name:mk1254cb017d09383da5ef370629d42bdf6d88ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.812368   61145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3 ...
	I0723 15:10:34.812388   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3: {Name:mkfcb32affd2bac243be7acf32499df9cb786f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.812479   61145 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt.2c7d9ab3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt
	I0723 15:10:34.812563   61145 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key
	I0723 15:10:34.812624   61145 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:10:34.812663   61145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt with IP's: []
	I0723 15:10:34.968992   61145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt ...
	I0723 15:10:34.969019   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt: {Name:mk7c499cf2d149f4efc9d085464f7cce66885ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.969189   61145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key ...
	I0723 15:10:34.969208   61145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key: {Name:mkb9b32fbe9e0f68e7d471fbb5c262566e3a92dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:10:34.969450   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:10:34.969492   61145 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:10:34.969500   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:10:34.969540   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:10:34.969572   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:10:34.969605   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:10:34.969662   61145 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:10:34.970273   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:10:34.994940   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:10:35.016898   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:10:35.038793   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:10:35.060364   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:10:35.082065   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:10:35.104784   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:10:35.127683   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:10:35.150417   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:10:35.173379   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:10:35.196144   61145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:10:35.219013   61145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:10:35.234575   61145 ssh_runner.go:195] Run: openssl version
	I0723 15:10:35.239886   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:10:35.249998   61145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:10:35.254127   61145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:10:35.254189   61145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:10:35.259823   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:10:35.270300   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:10:35.280921   61145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:10:35.285249   61145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:10:35.285309   61145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:10:35.290681   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:10:35.301013   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:10:35.311662   61145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:10:35.316014   61145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:10:35.316067   61145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:10:35.321697   61145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:10:35.332062   61145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:10:35.335824   61145 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 15:10:35.335879   61145 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:10:35.335980   61145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:10:35.336031   61145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:10:35.372538   61145 cri.go:89] found id: ""
	I0723 15:10:35.372623   61145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:10:35.382303   61145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:10:35.391547   61145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:10:35.400992   61145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:10:35.401015   61145 kubeadm.go:157] found existing configuration files:
	
	I0723 15:10:35.401061   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:10:35.412310   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:10:35.412371   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:10:35.421650   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:10:35.438246   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:10:35.438328   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:10:35.449289   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:10:35.459793   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:10:35.459850   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:10:35.471471   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:10:35.483265   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:10:35.483317   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:10:35.497890   61145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:10:35.771742   61145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:12:33.782566   61145 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:12:33.782779   61145 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:12:33.783898   61145 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:12:33.783999   61145 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:12:33.784179   61145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:12:33.784396   61145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:12:33.784673   61145 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:12:33.784930   61145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:12:33.786874   61145 out.go:204]   - Generating certificates and keys ...
	I0723 15:12:33.786970   61145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:12:33.787050   61145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:12:33.787109   61145 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 15:12:33.787182   61145 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 15:12:33.787256   61145 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 15:12:33.787305   61145 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 15:12:33.787371   61145 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 15:12:33.787501   61145 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	I0723 15:12:33.787599   61145 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 15:12:33.787743   61145 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	I0723 15:12:33.787834   61145 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 15:12:33.787912   61145 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 15:12:33.787950   61145 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 15:12:33.787998   61145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:12:33.788037   61145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:12:33.788082   61145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:12:33.788131   61145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:12:33.788173   61145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:12:33.788255   61145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:12:33.788331   61145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:12:33.788362   61145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:12:33.788414   61145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:12:33.790124   61145 out.go:204]   - Booting up control plane ...
	I0723 15:12:33.790213   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:12:33.790279   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:12:33.790342   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:12:33.790425   61145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:12:33.790545   61145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:12:33.790591   61145 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:12:33.790648   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:12:33.790797   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:12:33.790851   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:12:33.791065   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:12:33.791131   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:12:33.791285   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:12:33.791350   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:12:33.791538   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:12:33.791620   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:12:33.791832   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:12:33.791843   61145 kubeadm.go:310] 
	I0723 15:12:33.791896   61145 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:12:33.791982   61145 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:12:33.791999   61145 kubeadm.go:310] 
	I0723 15:12:33.792051   61145 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:12:33.792090   61145 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:12:33.792219   61145 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:12:33.792231   61145 kubeadm.go:310] 
	I0723 15:12:33.792360   61145 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:12:33.792397   61145 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:12:33.792425   61145 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:12:33.792431   61145 kubeadm.go:310] 
	I0723 15:12:33.792585   61145 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:12:33.792678   61145 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:12:33.792694   61145 kubeadm.go:310] 
	I0723 15:12:33.792820   61145 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:12:33.792940   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:12:33.793036   61145 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:12:33.793134   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:12:33.793187   61145 kubeadm.go:310] 
	W0723 15:12:33.793258   61145 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-000272] and IPs [192.168.50.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
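(At this point the first kubeadm init attempt has timed out waiting for the kubelet to become healthy. The diagnostics kubeadm suggests above can be run against the guest directly; a sketch using the same profile name and CRI socket recorded in this log, not commands executed by this run:

	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

minikube itself falls back to 'kubeadm reset' followed by a second init attempt, as the next log lines show.)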
	
	I0723 15:12:33.793311   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:12:34.284392   61145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:12:34.298350   61145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:12:34.307570   61145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:12:34.307593   61145 kubeadm.go:157] found existing configuration files:
	
	I0723 15:12:34.307634   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:12:34.316580   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:12:34.316649   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:12:34.325775   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:12:34.334638   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:12:34.334700   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:12:34.343659   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:12:34.351993   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:12:34.352051   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:12:34.360837   61145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:12:34.369182   61145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:12:34.369235   61145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:12:34.377889   61145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:12:34.443277   61145 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:12:34.443349   61145 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:12:34.585423   61145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:12:34.585523   61145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:12:34.585614   61145 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:12:34.758272   61145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:12:34.760811   61145 out.go:204]   - Generating certificates and keys ...
	I0723 15:12:34.760915   61145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:12:34.761002   61145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:12:34.761113   61145 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:12:34.761205   61145 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:12:34.761316   61145 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:12:34.761391   61145 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:12:34.761447   61145 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:12:34.761503   61145 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:12:34.761596   61145 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:12:34.761718   61145 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:12:34.761784   61145 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:12:34.761864   61145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:12:34.878098   61145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:12:34.967030   61145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:12:35.295067   61145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:12:35.413898   61145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:12:35.429452   61145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:12:35.430651   61145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:12:35.430771   61145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:12:35.569029   61145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:12:35.570664   61145 out.go:204]   - Booting up control plane ...
	I0723 15:12:35.570796   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:12:35.576642   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:12:35.577955   61145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:12:35.579387   61145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:12:35.582369   61145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:13:15.584418   61145 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:13:15.584899   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:15.585145   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:20.585685   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:20.585931   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:30.586788   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:30.587080   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:13:50.588334   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:13:50.588540   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:14:30.587817   61145 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:14:30.588089   61145 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:14:30.588115   61145 kubeadm.go:310] 
	I0723 15:14:30.588169   61145 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:14:30.588227   61145 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:14:30.588237   61145 kubeadm.go:310] 
	I0723 15:14:30.588278   61145 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:14:30.588331   61145 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:14:30.588483   61145 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:14:30.588496   61145 kubeadm.go:310] 
	I0723 15:14:30.588644   61145 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:14:30.588705   61145 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:14:30.588750   61145 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:14:30.588761   61145 kubeadm.go:310] 
	I0723 15:14:30.588891   61145 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:14:30.589002   61145 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:14:30.589015   61145 kubeadm.go:310] 
	I0723 15:14:30.589170   61145 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:14:30.589428   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:14:30.589583   61145 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:14:30.589691   61145 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:14:30.589702   61145 kubeadm.go:310] 
	I0723 15:14:30.590661   61145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:14:30.590791   61145 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:14:30.590890   61145 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:14:30.590961   61145 kubeadm.go:394] duration metric: took 3m55.255086134s to StartCluster
	I0723 15:14:30.591024   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:14:30.591087   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:14:30.640630   61145 cri.go:89] found id: ""
	I0723 15:14:30.640658   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.640669   61145 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:14:30.640676   61145 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:14:30.640732   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:14:30.678920   61145 cri.go:89] found id: ""
	I0723 15:14:30.678946   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.678954   61145 logs.go:278] No container was found matching "etcd"
	I0723 15:14:30.678962   61145 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:14:30.679023   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:14:30.717609   61145 cri.go:89] found id: ""
	I0723 15:14:30.717633   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.717642   61145 logs.go:278] No container was found matching "coredns"
	I0723 15:14:30.717649   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:14:30.717700   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:14:30.775955   61145 cri.go:89] found id: ""
	I0723 15:14:30.775986   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.775995   61145 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:14:30.776003   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:14:30.776069   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:14:30.810116   61145 cri.go:89] found id: ""
	I0723 15:14:30.810144   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.810155   61145 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:14:30.810163   61145 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:14:30.810224   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:14:30.844174   61145 cri.go:89] found id: ""
	I0723 15:14:30.844203   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.844214   61145 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:14:30.844222   61145 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:14:30.844284   61145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:14:30.880638   61145 cri.go:89] found id: ""
	I0723 15:14:30.880671   61145 logs.go:276] 0 containers: []
	W0723 15:14:30.880681   61145 logs.go:278] No container was found matching "kindnet"
	I0723 15:14:30.880693   61145 logs.go:123] Gathering logs for dmesg ...
	I0723 15:14:30.880709   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:14:30.895113   61145 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:14:30.895140   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:14:31.031289   61145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:14:31.031311   61145 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:14:31.031325   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:14:31.131316   61145 logs.go:123] Gathering logs for container status ...
	I0723 15:14:31.131352   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:14:31.181186   61145 logs.go:123] Gathering logs for kubelet ...
	I0723 15:14:31.181215   61145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 15:14:31.230540   61145 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:14:31.230594   61145 out.go:239] * 
	W0723 15:14:31.230658   61145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:14:31.230686   61145 out.go:239] * 
	W0723 15:14:31.231504   61145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:14:31.234028   61145 out.go:177] 
	W0723 15:14:31.235075   61145 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:14:31.235116   61145 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:14:31.235141   61145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:14:31.236539   61145 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 6 (239.241867ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:14:31.508559   64063 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-000272" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (291.34s)
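The failure mode above is kubeadm's wait-control-plane phase timing out because the kubelet on the v1.20.0 node never answers its health check on 127.0.0.1:10248. The log's own suggestion is to inspect the kubelet journal and retry the start with the kubelet pinned to the systemd cgroup driver. A minimal manual follow-up along those lines might look like the commands below; the profile name, driver, runtime, and Kubernetes version are taken from this run's arguments, and the commands are a sketch rather than part of the test harness:

# Inspect the kubelet journal inside the guest (per the log's suggestion)
minikube ssh -p old-k8s-version-000272 -- sudo journalctl -xeu kubelet | tail -n 50

# Retry the first start with the suggested kubelet cgroup-driver override
minikube start -p old-k8s-version-000272 \
  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
  --extra-config=kubelet.cgroup-driver=systemd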

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-543029 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-543029 --alsologtostderr -v=3: exit status 82 (2m0.4820343s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-543029"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:12:44.083894   63205 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:12:44.084316   63205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:12:44.084333   63205 out.go:304] Setting ErrFile to fd 2...
	I0723 15:12:44.084340   63205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:12:44.084813   63205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:12:44.085349   63205 out.go:298] Setting JSON to false
	I0723 15:12:44.085423   63205 mustload.go:65] Loading cluster: no-preload-543029
	I0723 15:12:44.085733   63205 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:12:44.085803   63205 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:12:44.085963   63205 mustload.go:65] Loading cluster: no-preload-543029
	I0723 15:12:44.086057   63205 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:12:44.086083   63205 stop.go:39] StopHost: no-preload-543029
	I0723 15:12:44.086443   63205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:12:44.086483   63205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:12:44.101464   63205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0723 15:12:44.101951   63205 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:12:44.102505   63205 main.go:141] libmachine: Using API Version  1
	I0723 15:12:44.102523   63205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:12:44.102888   63205 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:12:44.106061   63205 out.go:177] * Stopping node "no-preload-543029"  ...
	I0723 15:12:44.107415   63205 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 15:12:44.107449   63205 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:12:44.107662   63205 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 15:12:44.107678   63205 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:12:44.110707   63205 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:12:44.111106   63205 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:11:20 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:12:44.111135   63205 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:12:44.111265   63205 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:12:44.111445   63205 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:12:44.111590   63205 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:12:44.111726   63205 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:12:44.207011   63205 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 15:12:44.266182   63205 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 15:12:44.307766   63205 main.go:141] libmachine: Stopping "no-preload-543029"...
	I0723 15:12:44.307799   63205 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:12:44.309583   63205 main.go:141] libmachine: (no-preload-543029) Calling .Stop
	I0723 15:12:44.313304   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 0/120
	I0723 15:12:45.315118   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 1/120
	I0723 15:12:46.316729   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 2/120
	I0723 15:12:47.318623   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 3/120
	I0723 15:12:48.321181   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 4/120
	I0723 15:12:49.323256   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 5/120
	I0723 15:12:50.324833   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 6/120
	I0723 15:12:51.326351   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 7/120
	I0723 15:12:52.328443   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 8/120
	I0723 15:12:53.329930   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 9/120
	I0723 15:12:54.332156   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 10/120
	I0723 15:12:55.333589   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 11/120
	I0723 15:12:56.334983   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 12/120
	I0723 15:12:57.336420   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 13/120
	I0723 15:12:58.337845   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 14/120
	I0723 15:12:59.339742   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 15/120
	I0723 15:13:00.341695   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 16/120
	I0723 15:13:01.343214   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 17/120
	I0723 15:13:02.344653   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 18/120
	I0723 15:13:03.345965   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 19/120
	I0723 15:13:04.347273   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 20/120
	I0723 15:13:05.348738   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 21/120
	I0723 15:13:06.350263   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 22/120
	I0723 15:13:07.351435   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 23/120
	I0723 15:13:08.352975   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 24/120
	I0723 15:13:09.355058   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 25/120
	I0723 15:13:10.356874   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 26/120
	I0723 15:13:11.358304   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 27/120
	I0723 15:13:12.359825   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 28/120
	I0723 15:13:13.361798   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 29/120
	I0723 15:13:14.363399   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 30/120
	I0723 15:13:15.364934   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 31/120
	I0723 15:13:16.366593   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 32/120
	I0723 15:13:17.368034   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 33/120
	I0723 15:13:18.369571   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 34/120
	I0723 15:13:19.371551   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 35/120
	I0723 15:13:20.373009   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 36/120
	I0723 15:13:21.374368   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 37/120
	I0723 15:13:22.376537   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 38/120
	I0723 15:13:23.378116   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 39/120
	I0723 15:13:24.380368   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 40/120
	I0723 15:13:25.382054   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 41/120
	I0723 15:13:26.383533   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 42/120
	I0723 15:13:27.384958   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 43/120
	I0723 15:13:28.386681   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 44/120
	I0723 15:13:29.388685   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 45/120
	I0723 15:13:30.390180   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 46/120
	I0723 15:13:31.391669   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 47/120
	I0723 15:13:32.393092   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 48/120
	I0723 15:13:33.394854   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 49/120
	I0723 15:13:34.397094   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 50/120
	I0723 15:13:35.398662   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 51/120
	I0723 15:13:36.400314   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 52/120
	I0723 15:13:37.401848   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 53/120
	I0723 15:13:38.403422   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 54/120
	I0723 15:13:39.405692   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 55/120
	I0723 15:13:40.407308   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 56/120
	I0723 15:13:41.409014   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 57/120
	I0723 15:13:42.410461   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 58/120
	I0723 15:13:43.412010   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 59/120
	I0723 15:13:44.413111   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 60/120
	I0723 15:13:45.414915   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 61/120
	I0723 15:13:46.416418   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 62/120
	I0723 15:13:47.417887   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 63/120
	I0723 15:13:48.419497   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 64/120
	I0723 15:13:49.421583   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 65/120
	I0723 15:13:50.422771   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 66/120
	I0723 15:13:51.425059   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 67/120
	I0723 15:13:52.426729   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 68/120
	I0723 15:13:53.428769   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 69/120
	I0723 15:13:54.431380   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 70/120
	I0723 15:13:55.433067   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 71/120
	I0723 15:13:56.434371   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 72/120
	I0723 15:13:57.436219   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 73/120
	I0723 15:13:58.437864   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 74/120
	I0723 15:13:59.440194   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 75/120
	I0723 15:14:00.441677   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 76/120
	I0723 15:14:01.443143   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 77/120
	I0723 15:14:02.445007   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 78/120
	I0723 15:14:03.446557   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 79/120
	I0723 15:14:04.448941   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 80/120
	I0723 15:14:05.450450   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 81/120
	I0723 15:14:06.452072   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 82/120
	I0723 15:14:07.453712   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 83/120
	I0723 15:14:08.456164   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 84/120
	I0723 15:14:09.458660   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 85/120
	I0723 15:14:10.460739   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 86/120
	I0723 15:14:11.462229   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 87/120
	I0723 15:14:12.464340   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 88/120
	I0723 15:14:13.465657   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 89/120
	I0723 15:14:14.467710   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 90/120
	I0723 15:14:15.469185   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 91/120
	I0723 15:14:16.470828   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 92/120
	I0723 15:14:17.472259   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 93/120
	I0723 15:14:18.473583   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 94/120
	I0723 15:14:19.475741   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 95/120
	I0723 15:14:20.477445   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 96/120
	I0723 15:14:21.478779   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 97/120
	I0723 15:14:22.481011   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 98/120
	I0723 15:14:23.482394   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 99/120
	I0723 15:14:24.484751   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 100/120
	I0723 15:14:25.486199   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 101/120
	I0723 15:14:26.487632   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 102/120
	I0723 15:14:27.488934   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 103/120
	I0723 15:14:28.490547   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 104/120
	I0723 15:14:29.492092   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 105/120
	I0723 15:14:30.494214   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 106/120
	I0723 15:14:31.496299   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 107/120
	I0723 15:14:32.497627   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 108/120
	I0723 15:14:33.499475   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 109/120
	I0723 15:14:34.501686   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 110/120
	I0723 15:14:35.503141   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 111/120
	I0723 15:14:36.505153   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 112/120
	I0723 15:14:37.506823   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 113/120
	I0723 15:14:38.508331   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 114/120
	I0723 15:14:39.510461   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 115/120
	I0723 15:14:40.512230   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 116/120
	I0723 15:14:41.513694   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 117/120
	I0723 15:14:42.515293   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 118/120
	I0723 15:14:43.516949   63205 main.go:141] libmachine: (no-preload-543029) Waiting for machine to stop 119/120
	I0723 15:14:44.517886   63205 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 15:14:44.517956   63205 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0723 15:14:44.520480   63205 out.go:177] 
	W0723 15:14:44.521920   63205 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0723 15:14:44.521939   63205 out.go:239] * 
	W0723 15:14:44.524487   63205 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:14:44.525929   63205 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-543029 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
E0723 15:14:49.700220   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029: exit status 3 (18.44887519s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:02.974609   64224 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E0723 15:15:02.974626   64224 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-543029" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.93s)
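The stop failure above is the KVM guest never reaching a powered-off state within the 120 polling attempts, after which the profile is left in state "Running" and the follow-up status check can no longer reach 192.168.72.227. Outside the harness, the libvirt domain could be inspected and forced off by hand; this is a sketch that assumes virsh is available on the host and that the domain is named after the profile, as this run's DHCP-lease lines show:

# List domains and confirm the guest is still running
sudo virsh list --all

# Ask for a graceful ACPI shutdown first, then force power-off if it hangs
sudo virsh shutdown no-preload-543029
sudo virsh destroy no-preload-543029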

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-486436 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-486436 --alsologtostderr -v=3: exit status 82 (2m0.533207091s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-486436"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:13:16.625905   63451 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:13:16.626054   63451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:16.626065   63451 out.go:304] Setting ErrFile to fd 2...
	I0723 15:13:16.626071   63451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:16.626257   63451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:13:16.626521   63451 out.go:298] Setting JSON to false
	I0723 15:13:16.626613   63451 mustload.go:65] Loading cluster: embed-certs-486436
	I0723 15:13:16.626949   63451 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:16.627034   63451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:13:16.627211   63451 mustload.go:65] Loading cluster: embed-certs-486436
	I0723 15:13:16.627343   63451 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:16.627387   63451 stop.go:39] StopHost: embed-certs-486436
	I0723 15:13:16.627833   63451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:13:16.627885   63451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:13:16.643580   63451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0723 15:13:16.644102   63451 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:13:16.644763   63451 main.go:141] libmachine: Using API Version  1
	I0723 15:13:16.644790   63451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:13:16.645211   63451 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:13:16.647955   63451 out.go:177] * Stopping node "embed-certs-486436"  ...
	I0723 15:13:16.649393   63451 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 15:13:16.649438   63451 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:13:16.649723   63451 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 15:13:16.649749   63451 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:13:16.652740   63451 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:13:16.653188   63451 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:11:42 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:13:16.653245   63451 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:13:16.653402   63451 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:13:16.653619   63451 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:13:16.653794   63451 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:13:16.654009   63451 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:13:16.766503   63451 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 15:13:16.838825   63451 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 15:13:16.900383   63451 main.go:141] libmachine: Stopping "embed-certs-486436"...
	I0723 15:13:16.900419   63451 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:13:16.902412   63451 main.go:141] libmachine: (embed-certs-486436) Calling .Stop
	I0723 15:13:16.906072   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 0/120
	I0723 15:13:17.907444   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 1/120
	I0723 15:13:18.908814   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 2/120
	I0723 15:13:19.910317   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 3/120
	I0723 15:13:20.911892   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 4/120
	I0723 15:13:21.914089   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 5/120
	I0723 15:13:22.915716   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 6/120
	I0723 15:13:23.917339   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 7/120
	I0723 15:13:24.918878   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 8/120
	I0723 15:13:25.920658   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 9/120
	I0723 15:13:26.922220   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 10/120
	I0723 15:13:27.923960   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 11/120
	I0723 15:13:28.925519   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 12/120
	I0723 15:13:29.927048   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 13/120
	I0723 15:13:30.928637   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 14/120
	I0723 15:13:31.930836   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 15/120
	I0723 15:13:32.932500   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 16/120
	I0723 15:13:33.934157   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 17/120
	I0723 15:13:34.935824   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 18/120
	I0723 15:13:35.937287   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 19/120
	I0723 15:13:36.938621   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 20/120
	I0723 15:13:37.940156   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 21/120
	I0723 15:13:38.941684   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 22/120
	I0723 15:13:39.943279   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 23/120
	I0723 15:13:40.944755   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 24/120
	I0723 15:13:41.946805   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 25/120
	I0723 15:13:42.948700   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 26/120
	I0723 15:13:43.950844   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 27/120
	I0723 15:13:44.952269   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 28/120
	I0723 15:13:45.953872   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 29/120
	I0723 15:13:46.956004   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 30/120
	I0723 15:13:47.957408   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 31/120
	I0723 15:13:48.958795   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 32/120
	I0723 15:13:49.960318   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 33/120
	I0723 15:13:50.961725   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 34/120
	I0723 15:13:51.964074   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 35/120
	I0723 15:13:52.965469   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 36/120
	I0723 15:13:53.967001   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 37/120
	I0723 15:13:54.968981   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 38/120
	I0723 15:13:55.970674   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 39/120
	I0723 15:13:56.973015   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 40/120
	I0723 15:13:57.974445   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 41/120
	I0723 15:13:58.976178   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 42/120
	I0723 15:13:59.977960   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 43/120
	I0723 15:14:00.980298   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 44/120
	I0723 15:14:01.982458   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 45/120
	I0723 15:14:02.983802   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 46/120
	I0723 15:14:03.985276   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 47/120
	I0723 15:14:04.986790   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 48/120
	I0723 15:14:05.988428   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 49/120
	I0723 15:14:06.990652   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 50/120
	I0723 15:14:07.992880   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 51/120
	I0723 15:14:08.994191   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 52/120
	I0723 15:14:09.995903   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 53/120
	I0723 15:14:10.997520   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 54/120
	I0723 15:14:11.999973   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 55/120
	I0723 15:14:13.001861   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 56/120
	I0723 15:14:14.003151   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 57/120
	I0723 15:14:15.004777   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 58/120
	I0723 15:14:16.006247   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 59/120
	I0723 15:14:17.007793   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 60/120
	I0723 15:14:18.009095   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 61/120
	I0723 15:14:19.010531   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 62/120
	I0723 15:14:20.012740   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 63/120
	I0723 15:14:21.014277   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 64/120
	I0723 15:14:22.016080   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 65/120
	I0723 15:14:23.017845   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 66/120
	I0723 15:14:24.019296   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 67/120
	I0723 15:14:25.020820   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 68/120
	I0723 15:14:26.022541   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 69/120
	I0723 15:14:27.024678   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 70/120
	I0723 15:14:28.027901   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 71/120
	I0723 15:14:29.029494   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 72/120
	I0723 15:14:30.031064   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 73/120
	I0723 15:14:31.033182   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 74/120
	I0723 15:14:32.035227   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 75/120
	I0723 15:14:33.036654   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 76/120
	I0723 15:14:34.038372   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 77/120
	I0723 15:14:35.039828   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 78/120
	I0723 15:14:36.041495   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 79/120
	I0723 15:14:37.043877   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 80/120
	I0723 15:14:38.045406   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 81/120
	I0723 15:14:39.047059   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 82/120
	I0723 15:14:40.048616   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 83/120
	I0723 15:14:41.050561   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 84/120
	I0723 15:14:42.052349   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 85/120
	I0723 15:14:43.053746   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 86/120
	I0723 15:14:44.055269   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 87/120
	I0723 15:14:45.056907   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 88/120
	I0723 15:14:46.058511   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 89/120
	I0723 15:14:47.059864   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 90/120
	I0723 15:14:48.061588   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 91/120
	I0723 15:14:49.063076   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 92/120
	I0723 15:14:50.064610   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 93/120
	I0723 15:14:51.065856   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 94/120
	I0723 15:14:52.067901   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 95/120
	I0723 15:14:53.069225   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 96/120
	I0723 15:14:54.070473   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 97/120
	I0723 15:14:55.071977   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 98/120
	I0723 15:14:56.073403   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 99/120
	I0723 15:14:57.075624   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 100/120
	I0723 15:14:58.076889   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 101/120
	I0723 15:14:59.078514   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 102/120
	I0723 15:15:00.079865   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 103/120
	I0723 15:15:01.081288   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 104/120
	I0723 15:15:02.083198   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 105/120
	I0723 15:15:03.085053   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 106/120
	I0723 15:15:04.086696   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 107/120
	I0723 15:15:05.089079   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 108/120
	I0723 15:15:06.090274   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 109/120
	I0723 15:15:07.092499   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 110/120
	I0723 15:15:08.094067   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 111/120
	I0723 15:15:09.095548   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 112/120
	I0723 15:15:10.097010   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 113/120
	I0723 15:15:11.098451   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 114/120
	I0723 15:15:12.100532   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 115/120
	I0723 15:15:13.101881   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 116/120
	I0723 15:15:14.104646   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 117/120
	I0723 15:15:15.106673   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 118/120
	I0723 15:15:16.108965   63451 main.go:141] libmachine: (embed-certs-486436) Waiting for machine to stop 119/120
	I0723 15:15:17.110078   63451 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 15:15:17.110152   63451 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0723 15:15:17.111991   63451 out.go:177] 
	W0723 15:15:17.113371   63451 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0723 15:15:17.113404   63451 out.go:239] * 
	* 
	W0723 15:15:17.115958   63451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:15:17.117312   63451 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-486436 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436: exit status 3 (18.617454747s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:35.738807   64892 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0723 15:15:35.738830   64892 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-486436" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.15s)
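Note: the stop loop above polls the guest once per second for 120 iterations and then gives up with GUEST_STOP_TIMEOUT, leaving the domain in state "Running". As a hedged troubleshooting sketch only (not something the test runs), the shutdown can be retried with the same command from the log, or forced through libvirt, assuming the kvm2 domain keeps the profile name shown in the DBG lines above:

	out/minikube-linux-amd64 stop -p embed-certs-486436 --alsologtostderr -v=3
	virsh shutdown embed-certs-486436    (graceful ACPI shutdown request)
	virsh destroy embed-certs-486436     (hard power-off, last resort)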

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-000272 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-000272 create -f testdata/busybox.yaml: exit status 1 (43.329036ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-000272" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-000272 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 6 (242.699743ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:14:31.796699   64103 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-000272" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 6 (219.445432ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:14:32.016761   64132 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-000272" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
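Note: kubectl fails here because the "old-k8s-version-000272" context is missing from /home/jenkins/minikube-integration/19319-11303/kubeconfig, and the status output itself names the fix ("run `minikube update-context`"). A minimal sketch of that repair and the follow-up check, using only commands already referenced in this report plus the standard kubectl config subcommand:

	out/minikube-linux-amd64 -p old-k8s-version-000272 update-context
	kubectl config get-contexts
	kubectl --context old-k8s-version-000272 create -f testdata/busybox.yaml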

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-000272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-000272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.801217254s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-000272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-000272 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-000272 describe deploy/metrics-server -n kube-system: exit status 1 (44.879241ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-000272" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-000272 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 6 (219.323786ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:16:23.081255   65475 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-000272" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.07s)
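Note: this enable fails one layer deeper than the DeployApp case: the kubectl apply that minikube runs inside the guest is refused by the apiserver on localhost:8443, so the control plane is not serving rather than the addon manifests being wrong. A hedged sketch of how one might confirm that before re-running the enable, using the profile's status and logs commands the report already references:

	out/minikube-linux-amd64 status -p old-k8s-version-000272
	out/minikube-linux-amd64 -p old-k8s-version-000272 logs --file=logs.txt
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-000272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain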

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029: exit status 3 (3.162735448s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:06.138679   64443 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E0723 15:15:06.138698   64443 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-543029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-543029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154667556s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-543029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029: exit status 3 (3.062523865s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:15.354826   64808 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E0723 15:15:15.354853   64808 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-543029" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436: exit status 3 (3.16921666s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:38.906774   65047 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0723 15:15:38.906802   65047 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-486436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-486436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153246674s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-486436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436: exit status 3 (3.06102871s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:15:48.122709   65111 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0723 15:15:48.122725   65111 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-486436" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-911217 --alsologtostderr -v=3
E0723 15:16:12.748609   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-911217 --alsologtostderr -v=3: exit status 82 (2m0.503877993s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-911217"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:16:10.643484   65397 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:16:10.643618   65397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:16:10.643629   65397 out.go:304] Setting ErrFile to fd 2...
	I0723 15:16:10.643637   65397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:16:10.643812   65397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:16:10.644039   65397 out.go:298] Setting JSON to false
	I0723 15:16:10.644115   65397 mustload.go:65] Loading cluster: default-k8s-diff-port-911217
	I0723 15:16:10.644464   65397 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:16:10.644535   65397 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:16:10.644765   65397 mustload.go:65] Loading cluster: default-k8s-diff-port-911217
	I0723 15:16:10.644911   65397 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:16:10.644968   65397 stop.go:39] StopHost: default-k8s-diff-port-911217
	I0723 15:16:10.645409   65397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:16:10.645444   65397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:16:10.660214   65397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0723 15:16:10.660714   65397 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:16:10.661285   65397 main.go:141] libmachine: Using API Version  1
	I0723 15:16:10.661313   65397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:16:10.661809   65397 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:16:10.664559   65397 out.go:177] * Stopping node "default-k8s-diff-port-911217"  ...
	I0723 15:16:10.665904   65397 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0723 15:16:10.665951   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:16:10.666221   65397 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0723 15:16:10.666252   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:16:10.669242   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:16:10.669644   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:15:18 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:16:10.669680   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:16:10.669821   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:16:10.670004   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:16:10.670167   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:16:10.670336   65397 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:16:10.762819   65397 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0723 15:16:10.823687   65397 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0723 15:16:10.890664   65397 main.go:141] libmachine: Stopping "default-k8s-diff-port-911217"...
	I0723 15:16:10.890687   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:16:10.892409   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Stop
	I0723 15:16:10.896040   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 0/120
	I0723 15:16:11.897542   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 1/120
	I0723 15:16:12.898932   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 2/120
	I0723 15:16:13.900449   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 3/120
	I0723 15:16:14.902074   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 4/120
	I0723 15:16:15.904163   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 5/120
	I0723 15:16:16.905537   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 6/120
	I0723 15:16:17.906864   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 7/120
	I0723 15:16:18.908585   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 8/120
	I0723 15:16:19.910061   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 9/120
	I0723 15:16:20.911539   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 10/120
	I0723 15:16:21.913200   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 11/120
	I0723 15:16:22.914766   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 12/120
	I0723 15:16:23.916562   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 13/120
	I0723 15:16:24.918306   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 14/120
	I0723 15:16:25.920548   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 15/120
	I0723 15:16:26.921924   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 16/120
	I0723 15:16:27.923716   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 17/120
	I0723 15:16:28.925194   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 18/120
	I0723 15:16:29.926957   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 19/120
	I0723 15:16:30.929222   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 20/120
	I0723 15:16:31.930655   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 21/120
	I0723 15:16:32.932223   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 22/120
	I0723 15:16:33.933582   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 23/120
	I0723 15:16:34.935176   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 24/120
	I0723 15:16:35.936843   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 25/120
	I0723 15:16:36.938408   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 26/120
	I0723 15:16:37.939663   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 27/120
	I0723 15:16:38.941157   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 28/120
	I0723 15:16:39.942470   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 29/120
	I0723 15:16:40.944662   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 30/120
	I0723 15:16:41.946147   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 31/120
	I0723 15:16:42.947817   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 32/120
	I0723 15:16:43.949363   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 33/120
	I0723 15:16:44.950931   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 34/120
	I0723 15:16:45.953127   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 35/120
	I0723 15:16:46.954904   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 36/120
	I0723 15:16:47.956449   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 37/120
	I0723 15:16:48.957815   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 38/120
	I0723 15:16:49.959525   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 39/120
	I0723 15:16:50.961910   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 40/120
	I0723 15:16:51.963850   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 41/120
	I0723 15:16:52.965175   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 42/120
	I0723 15:16:53.966641   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 43/120
	I0723 15:16:54.968164   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 44/120
	I0723 15:16:55.970587   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 45/120
	I0723 15:16:56.972006   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 46/120
	I0723 15:16:57.973460   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 47/120
	I0723 15:16:58.974902   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 48/120
	I0723 15:16:59.976341   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 49/120
	I0723 15:17:00.978517   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 50/120
	I0723 15:17:01.979879   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 51/120
	I0723 15:17:02.981273   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 52/120
	I0723 15:17:03.982672   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 53/120
	I0723 15:17:04.984327   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 54/120
	I0723 15:17:05.986580   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 55/120
	I0723 15:17:06.987825   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 56/120
	I0723 15:17:07.989269   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 57/120
	I0723 15:17:08.990844   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 58/120
	I0723 15:17:09.992393   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 59/120
	I0723 15:17:10.994933   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 60/120
	I0723 15:17:11.996401   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 61/120
	I0723 15:17:12.998094   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 62/120
	I0723 15:17:13.999808   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 63/120
	I0723 15:17:15.001261   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 64/120
	I0723 15:17:16.003367   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 65/120
	I0723 15:17:17.004870   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 66/120
	I0723 15:17:18.006347   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 67/120
	I0723 15:17:19.007763   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 68/120
	I0723 15:17:20.009223   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 69/120
	I0723 15:17:21.011739   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 70/120
	I0723 15:17:22.013607   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 71/120
	I0723 15:17:23.015139   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 72/120
	I0723 15:17:24.016619   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 73/120
	I0723 15:17:25.018280   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 74/120
	I0723 15:17:26.020282   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 75/120
	I0723 15:17:27.021971   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 76/120
	I0723 15:17:28.023622   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 77/120
	I0723 15:17:29.025354   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 78/120
	I0723 15:17:30.026947   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 79/120
	I0723 15:17:31.029488   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 80/120
	I0723 15:17:32.031086   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 81/120
	I0723 15:17:33.032568   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 82/120
	I0723 15:17:34.033972   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 83/120
	I0723 15:17:35.035576   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 84/120
	I0723 15:17:36.038000   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 85/120
	I0723 15:17:37.039547   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 86/120
	I0723 15:17:38.041208   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 87/120
	I0723 15:17:39.043129   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 88/120
	I0723 15:17:40.044937   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 89/120
	I0723 15:17:41.047668   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 90/120
	I0723 15:17:42.049237   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 91/120
	I0723 15:17:43.050835   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 92/120
	I0723 15:17:44.052254   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 93/120
	I0723 15:17:45.053556   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 94/120
	I0723 15:17:46.055609   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 95/120
	I0723 15:17:47.056964   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 96/120
	I0723 15:17:48.058407   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 97/120
	I0723 15:17:49.059920   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 98/120
	I0723 15:17:50.061554   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 99/120
	I0723 15:17:51.063902   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 100/120
	I0723 15:17:52.065418   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 101/120
	I0723 15:17:53.066926   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 102/120
	I0723 15:17:54.068544   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 103/120
	I0723 15:17:55.070003   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 104/120
	I0723 15:17:56.071988   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 105/120
	I0723 15:17:57.073525   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 106/120
	I0723 15:17:58.074954   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 107/120
	I0723 15:17:59.076529   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 108/120
	I0723 15:18:00.077923   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 109/120
	I0723 15:18:01.080364   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 110/120
	I0723 15:18:02.082058   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 111/120
	I0723 15:18:03.083499   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 112/120
	I0723 15:18:04.085181   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 113/120
	I0723 15:18:05.086699   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 114/120
	I0723 15:18:06.088804   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 115/120
	I0723 15:18:07.090305   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 116/120
	I0723 15:18:08.091727   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 117/120
	I0723 15:18:09.093329   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 118/120
	I0723 15:18:10.094929   65397 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for machine to stop 119/120
	I0723 15:18:11.096409   65397 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0723 15:18:11.096494   65397 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0723 15:18:11.098918   65397 out.go:177] 
	W0723 15:18:11.100425   65397 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0723 15:18:11.100444   65397 out.go:239] * 
	* 
	W0723 15:18:11.103095   65397 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:18:11.105420   65397 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-911217 --alsologtostderr -v=3" : exit status 82
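The stderr block above shows the shape of this failure: the kvm2 driver polls "Waiting for machine to stop N/120" roughly once per second and, after 120 attempts, gives up with GUEST_STOP_TIMEOUT while the domain is still "Running". Below is a minimal shell sketch of one way to confirm and clear that state by hand on the libvirt host; it assumes the libvirt domain is named after the minikube profile (default-k8s-diff-port-911217), which is how the kvm2 driver normally names it, and it is a diagnostic workaround, not part of the test harness.

	# Request an ACPI shutdown, mirroring what the driver's stop already attempted.
	virsh shutdown default-k8s-diff-port-911217
	# Poll the domain state for up to 120s, the same budget as the wait loop above.
	for i in $(seq 1 120); do
	  state="$(virsh domstate default-k8s-diff-port-911217 2>/dev/null)"
	  [ "$state" = "shut off" ] && break
	  sleep 1
	done
	# If the guest never shut down cleanly, force it off so later runs can proceed.
	[ "$state" = "shut off" ] || virsh destroy default-k8s-diff-port-911217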
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217: exit status 3 (18.45555019s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:18:29.562843   66435 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host
	E0723 15:18:29.562875   66435 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911217" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)
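The post-mortem status probe exits 3 because SSH to 192.168.61.64:22 reports "no route to host", so the report cannot tell whether the guest is actually stopped or merely unreachable. A hedged sketch of checks one could run on the hypervisor to distinguish the two follows; the SSH key path is an assumption based on the machines/<profile>/id_rsa layout visible elsewhere in this log, not something printed for this profile.

	# Is the domain still running from libvirt's point of view?
	virsh domstate default-k8s-diff-port-911217
	# Does the guest still hold a DHCP lease on its network?
	virsh domifaddr default-k8s-diff-port-911217
	# If it does, try SSH directly with minikube's per-machine key
	# (path assumed from the machines/<profile>/id_rsa convention seen in this log).
	ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 \
	  -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa \
	  docker@192.168.61.64 uptime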

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (749.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0723 15:17:11.818984   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m26.190098467s)
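This SecondStart run spends almost four minutes just waiting on acquireMachinesLock (compare the 15:16:26 and 15:20:21 timestamps in the stderr below) before it even restarts the VM, and ultimately exits 109 after 12m26s. When reproducing such a failure by hand, the error boxes earlier in this report already recommend collecting logs; a hedged sketch of that reproduction and collection step for this profile is below (the flags are taken verbatim from the failing command, and the `logs` invocation uses standard minikube flags but is an assumption, not something the harness ran).

	# Re-run the exact start invocation from above, then capture full logs for triage.
	out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0
	out/minikube-linux-amd64 logs -p old-k8s-version-000272 --file=logs.txt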

                                                
                                                
-- stdout --
	* [old-k8s-version-000272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-000272" primary control-plane node in "old-k8s-version-000272" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:16:26.846785   65605 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:16:26.847034   65605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:16:26.847043   65605 out.go:304] Setting ErrFile to fd 2...
	I0723 15:16:26.847047   65605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:16:26.847236   65605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:16:26.847736   65605 out.go:298] Setting JSON to false
	I0723 15:16:26.848596   65605 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7133,"bootTime":1721740654,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:16:26.848654   65605 start.go:139] virtualization: kvm guest
	I0723 15:16:26.851041   65605 out.go:177] * [old-k8s-version-000272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:16:26.852369   65605 notify.go:220] Checking for updates...
	I0723 15:16:26.852412   65605 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:16:26.853936   65605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:16:26.855529   65605 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:16:26.856758   65605 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:16:26.858108   65605 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:16:26.859426   65605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:16:26.861264   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:16:26.861820   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:16:26.861877   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:16:26.876520   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0723 15:16:26.876890   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:16:26.877446   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:16:26.877475   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:16:26.877792   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:16:26.877964   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:16:26.879994   65605 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0723 15:16:26.881239   65605 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:16:26.881525   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:16:26.881581   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:16:26.895809   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0723 15:16:26.896196   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:16:26.896594   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:16:26.896620   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:16:26.896952   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:16:26.897115   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:16:26.931930   65605 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:16:26.933661   65605 start.go:297] selected driver: kvm2
	I0723 15:16:26.933681   65605 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:16:26.933803   65605 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:16:26.934524   65605 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:16:26.934643   65605 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:16:26.949412   65605 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:16:26.949793   65605 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:16:26.949833   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:16:26.949849   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:16:26.949908   65605 start.go:340] cluster config:
	{Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:16:26.950019   65605 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:16:26.952135   65605 out.go:177] * Starting "old-k8s-version-000272" primary control-plane node in "old-k8s-version-000272" cluster
	I0723 15:16:26.953563   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:16:26.953607   65605 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0723 15:16:26.953617   65605 cache.go:56] Caching tarball of preloaded images
	I0723 15:16:26.953687   65605 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:16:26.953700   65605 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0723 15:16:26.953808   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:16:26.953997   65605 start.go:360] acquireMachinesLock for old-k8s-version-000272: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
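
The sed one-liners above point the CRI-O drop-in at the cgroupfs cgroup manager before crio is restarted. Below is a small Go sketch of the same edit done in-process; the setCrioCgroupManager helper and its error handling are invented for illustration, and on a real node the file is root-owned, which is why the log routes the edit through sudo instead.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioCgroupManager rewrites the cgroup_manager line of a CRI-O drop-in so
// the runtime matches the kubelet's cgroup driver, mirroring the sed above.
func setCrioCgroupManager(path, driver string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`cgroup_manager = %q`, driver)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Illustrative invocation using the same path and driver as the log.
	if err := setCrioCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, "update failed:", err)
	}
}
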
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
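
Before deciding that images must be transferred, the loader asks the runtime what it already holds (the "sudo crictl images --output json" runs above) and only loads tags that are missing. A rough sketch of that presence check follows; the JSON field names (images, repoTags) are assumed from typical crictl output, and the imagePresent helper is not minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models only the fields this sketch needs from
// `crictl images --output json`; treat the field names as an assumption.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagePresent reports whether the runtime already has the given tag.
func imagePresent(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePresent("registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err)
}
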
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
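
The kubelet unit drop-in shown above is built in memory and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later. One way to render such a drop-in is with text/template, sketched below; the template text, struct fields, and values are assumptions lifted from this log rather than from minikube's sources.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative template for the drop-in shown above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

// dropInParams carries the per-node values; the field names are invented here.
type dropInParams struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	p := dropInParams{
		KubeletPath: "/var/lib/minikube/binaries/v1.20.0/kubelet",
		NodeName:    "old-k8s-version-000272",
		NodeIP:      "192.168.50.51",
	}
	// The real flow copies the rendered buffer over SSH; here we just print it.
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
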
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
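
The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. The sketch below does the equivalent in Go: drop any stale mapping, append the fresh one, and hand back a temp file for a privileged copy. The pinHostsEntry name and the temp-file flow are illustrative only.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any existing line for name and appends "<ip>\t<name>".
// Writing /etc/hosts needs root, so the result goes to a temp file first,
// the same way the log stages /tmp/h.$$ before the sudo cp.
func pinHostsEntry(hostsPath, ip, name string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // discard the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp, err := os.CreateTemp("", "hosts")
	if err != nil {
		return "", err
	}
	defer tmp.Close()
	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
		return "", err
	}
	return tmp.Name(), nil // caller would `sudo cp` this over /etc/hosts
}

func main() {
	path, err := pinHostsEntry("/etc/hosts", "192.168.50.51", "control-plane.minikube.internal")
	fmt.Println(path, err)
}
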
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
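
Each of the "openssl x509 ... -checkend 86400" runs above asks whether a certificate expires within the next 24 hours. The same check in Go, using crypto/x509, looks roughly like this; the expiresWithin helper is a sketch, not the code minikube runs.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window, matching the semantics of openssl's -checkend.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path matches one of the checks in the log; 24h mirrors -checkend 86400.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
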
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
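	Note: the checks in the loop above can be re-run by hand to confirm the node state. A minimal sketch using the same commands the harness runs (assumption: shell access to the guest, e.g. via "minikube ssh" with the right profile; the profile name is not visible in this excerpt):

	    # Is a kube-apiserver process for this profile running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	    # Does CRI-O know of any control-plane containers, running or exited?
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "--- $name ---"; sudo crictl ps -a --quiet --name="$name"
	    done

	    # Recent kubelet and CRI-O logs, as gathered by the harness.
	    sudo journalctl -u kubelet -n 400 | tail -n 50
	    sudo journalctl -u crio -n 400 | tail -n 50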
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
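	Note: the recurring "connection to the server localhost:8443 was refused" means nothing is serving the apiserver port yet, consistent with crictl finding no kube-apiserver container above. A quick manual check from inside the guest (assumptions: ss and curl are available in the guest image; /healthz is the standard apiserver health endpoint):

	    # Is anything listening on the apiserver port the kubeconfig points at?
	    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"

	    # If something is listening, ask the apiserver's health endpoint directly.
	    curl -sk https://localhost:8443/healthz || true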
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
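	Note: when crictl repeatedly reports zero control-plane containers, the next place to look is whether the kubelet has the static-pod manifests and why it is not starting them. A sketch, assuming the kubeadm-style layout that minikube's kubeadm bootstrapper uses:

	    # Static-pod manifests the kubelet is expected to run.
	    ls -l /etc/kubernetes/manifests/

	    # Pod sandboxes the CRI runtime knows about, if any.
	    sudo crictl pods

	    # Kubelet errors that usually explain why the static pods never start.
	    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 20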
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
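	(Editorial note on the loop above, for readers following the log: minikube repeats the same diagnostics every few seconds — it polls for a kube-apiserver process, lists CRI containers for each control-plane component, and gathers kubelet/dmesg/CRI-O logs, with "describe nodes" failing because nothing is listening on localhost:8443 yet. The commands below are a minimal sketch of that loop, copied from the log lines themselves, in case you want to run the same checks by hand inside the node; it assumes crictl, journalctl, and the bundled kubectl binary are available, as they are in the minikube VM.)

	# Probe for a running kube-apiserver process (same check the log performs)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# List any control-plane containers CRI-O knows about (all return empty here)
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=coredns

	# Gather the same logs minikube collects on each pass
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	# The step that fails on every pass: the apiserver at localhost:8443 is not up
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig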
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 

                                                
                                                
** /stderr **
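The kubeadm output quoted above repeatedly fails the kubelet health probe on localhost:10248. A minimal triage sketch, assuming the node is still reachable over `minikube ssh` for this profile; these are only the commands the kubeadm advice itself names, wrapped so they run inside the VM rather than on the CI host:

    # kubelet service state and recent journal entries on the node
    out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    # any control-plane containers CRI-O managed to start (kubeadm's suggested listing)
    out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"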
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
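The suggestion embedded in the failure output is to pin the kubelet cgroup driver to systemd. A hedged sketch of that retry, reusing the exact arguments from the failed invocation above plus the suggested flag; whether this actually unblocks the v1.20.0 kubelet on this host is not something this run verifies:

    # same start as the failed one, with the cgroup-driver override suggested in the log
    out/minikube-linux-amd64 start -p old-k8s-version-000272 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd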
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (223.636332ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
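Exit status 2 with a Host value of Running is consistent with the VM being up while the control plane is not; the template here only asks for the Host field. A small sketch that also surfaces the kubelet and apiserver state, assuming the standard Kubelet/APIServer/Kubeconfig fields of minikube's status template:

    # host, kubelet, apiserver and kubeconfig state for the profile on one line
    out/minikube-linux-amd64 status -p old-k8s-version-000272 \
      --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'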
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25: (1.622967018s)
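The advice box in the failure output asks for a full log bundle when reporting the issue; the 25-line capture below is only a tail. A sketch of grabbing the complete logs for this profile with the `--file` flag quoted in that box (the output filename is arbitrary):

    # dump complete minikube logs for the failing profile to a file for attachment
    out/minikube-linux-amd64 -p old-k8s-version-000272 logs --file=old-k8s-version-000272-logs.txt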
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
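The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O. A minimal way to spot-check the result by hand, assuming the same paths and key names as the logged sed commands:

    # keys edited by the sed commands above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # the earlier sysctl probe failed only because br_netfilter was not loaded yet;
    # after the modprobe it should report a value (typically 1)
    sysctl net.bridge.bridge-nf-call-iptables
    sudo crictl info | head -n 20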
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
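The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before being applied. If the kubeadm build at the logged path has the validate subcommand (available in recent releases), it can be sanity-checked offline first, a sketch:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new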
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
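The kubelet unit and 10-kubeadm.conf drop-in written above take effect after the daemon-reload and start; a quick way to confirm systemd picked up the override shown earlier:

    systemctl cat kubelet        # should show 10-kubeadm.conf with the ExecStart override
    systemctl is-active kubelet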
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
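The 8-hex-digit link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash values; the hash printed for a certificate is what becomes its <hash>.0 symlink in /etc/ssl/certs, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941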
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
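Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours (86400 seconds); the same check can be run by hand against any of those files, for example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'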
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
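The 403 responses earlier in this wait come from probing /healthz anonymously before the RBAC bootstrap post-start hooks finish; the same endpoint can be probed by hand, anonymously or through kubectl, assuming the profile's context is in the local kubeconfig as elsewhere in this report:

    curl -k https://192.168.39.200:8443/healthz ; echo
    kubectl --context embed-certs-486436 get --raw '/healthz?verbose'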
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
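The bash one-liner above keeps /etc/hosts idempotent: any existing line ending in a tab plus control-plane.minikube.internal is dropped, and the current mapping is appended. A minimal Go sketch of the same rewrite (IP, hostname, and file path taken from the log line; error handling and atomic-write details are simplified assumptions):

    // ensure_hosts.go: illustrative equivalent of the grep/echo rewrite above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsFile = "/etc/hosts"
        const entry = "192.168.50.51\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Drop any stale control-plane.minikube.internal mapping.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        // Append the mapping for the current control-plane IP.
        kept = append(kept, entry)
        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }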
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
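The repeated "openssl x509 -checkend 86400" calls above ask whether each control-plane certificate will still be valid 24 hours from now (a non-zero exit means it will have expired within the window). A hedged Go sketch of that check for a single PEM file, reusing one of the paths from the log; everything else is illustrative:

    // checkend.go: roughly what `openssl x509 -checkend 86400` tests for one cert.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Will the certificate still be valid 86400 seconds from now?
        deadline := time.Now().Add(86400 * time.Second)
        if cert.NotAfter.Before(deadline) {
            fmt.Println("Certificate will expire")
            os.Exit(1) // mirrors openssl's non-zero exit in this case
        }
        fmt.Println("Certificate will not expire")
    }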
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
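The "generating server cert" line above records the subject alternative names that end up in this machine's server.pem (127.0.0.1, 192.168.61.64, the machine name, localhost, minikube) and the organization string. As a rough illustration only: minikube signs this certificate with its own CA key, whereas the sketch below produces a self-signed certificate carrying the same SANs; key size, lifetime, and self-signing are assumptions, not what minikube does.

    // server_cert_sketch.go: self-signed cert with the SANs recorded in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-911217"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.64")},
            DNSNames:     []string{"default-k8s-diff-port-911217", "localhost", "minikube"},
        }
        // Self-signed here; minikube instead signs with its ca.pem/ca-key.pem pair.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }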
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
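postStartSetup amounts to creating the directory skeleton listed above and mirroring local assets from the .minikube/files tree onto the guest; the directory creation the log shows is simply:

    sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube \
      /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images \
      /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
    # the local asset 185032.pem is then copied into /etc/ssl/certs via minikube's internal SCP (not shown here)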
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
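The guest-clock check runs date +%s.%N on the node (logged above as date +%!s(MISSING).%!N(MISSING)) and compares the result with host time. An approximate stand-alone equivalent, assuming plain SSH with the key path from the log, is:

    key=/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa
    guest=$(ssh -i "$key" docker@192.168.61.64 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %.3fs\n", h - g }'   # here ~0.074s, within tolerance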
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
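The interleaved pod_ready lines come from another test profile polling the Ready condition of its metrics-server pod. The harness polls through client-go, but the equivalent one-liner with kubectl against that profile's context (shown only for illustration) would be:

    kubectl -n kube-system wait pod metrics-server-569cc877fc-rq67z --for=condition=Ready --timeout=5m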
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
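The block above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.9 image and the cgroupfs cgroup manager, opens unprivileged low ports, loads br_netfilter, and then restarts the runtime. Collected from the logged commands, the core edits are:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio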
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
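The repeated pgrep calls above (every ~500ms) are how that profile waits for its kube-apiserver process to appear after a restart; as a plain shell loop the wait amounts to:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done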
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
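Because no preloaded images were found, the ~406 MB preload tarball is copied to the guest and unpacked into /var before crictl is queried again; the extraction step from the log is:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # now reports the v1.30.3 images as present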
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
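The retry.go lines show the KVM driver polling libvirt for a DHCP lease on the domain's MAC address with growing, randomized delays. A rough interactive equivalent (virsh here is only an illustrative stand-in; minikube talks to libvirt through its Go bindings) would be:

    while ! sudo virsh net-dhcp-leases mk-no-preload-543029 | grep -q '52:54:00:6f:c7:b7'; do
      sleep 1
    done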
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
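The rendered kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new (2169 bytes) alongside the kubelet unit and drop-in. The log does not run it, but a config of this shape could be sanity-checked on the node with kubeadm's own validator, e.g.:

    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new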
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
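Each CA is registered with OpenSSL's hashed-symlink scheme: the subject hash is computed and a <hash>.0 symlink is created in /etc/ssl/certs, which is what the test -L || ln -fs commands above do. For the minikube CA this is:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"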
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
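The -checkend 86400 probes above verify that each control-plane certificate stays valid for at least another 24 hours; openssl exits non-zero if a cert would expire within that window, e.g.:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another day"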
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
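Because existing configuration was found, the restart path rebuilds the control plane by driving individual kubeadm phases against the new config instead of running a full kubeadm init; taken from the commands above:

    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml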
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
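A side note on the fix.go clock check above: the delta is simply the guest timestamp (read over SSH with `date +%s.%N`) minus the host-side timestamp, compared against a tolerance. A minimal Go sketch of that arithmetic, using the values from the log and an assumed tolerance (the real threshold lives in minikube's fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta mirrors the fix.go check above: compare the guest clock read over
	// SSH with the host-side timestamp and report whether the skew is acceptable.
	// The 2s tolerance here is an assumption for illustration only.
	func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d, d <= tolerance
	}

	func main() {
		guest := time.Date(2024, 7, 23, 15, 21, 20, 694505305, time.UTC)
		host := time.Date(2024, 7, 23, 15, 21, 20, 603582679, time.UTC)
		d, ok := clockDelta(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=90.922626ms within tolerance=true
	}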
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
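The sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start. A rough Go equivalent of the key/value rewrite, assuming a hypothetical setCrioOption helper (minikube itself shells out to sed):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCrioOption rewrites a `key = value` line in a cri-o config snippet,
	// the same effect as the sed invocations in the log above. Illustrative only.
	func setCrioOption(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}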
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
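For context on the three commands above: the sysctl probe fails until the br_netfilter module is loaded, so the fallback is to modprobe it and then enable IPv4 forwarding. A sketch of that sequence in Go, using the same commands the log shows (the helper itself is illustrative, not minikube's actual function):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter: probe the bridge-nf-call-iptables sysctl, load the
	// br_netfilter module if the key is missing, then enable IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The sysctl key only exists once br_netfilter is loaded.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("loading br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}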
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
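The one-liner above makes the host.minikube.internal entry idempotent: any existing line for that hostname is filtered out before the fresh IP mapping is appended. A small Go sketch of the same upsert, operating on the file contents as a string (writing the result back, the `sudo cp` step, is omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry drops any existing line ending in "\t<name>" and appends a
	// fresh "<ip>\t<name>" entry, mirroring the grep/echo pipeline in the log.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
	}

	func main() {
		fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost", "192.168.72.1", "host.minikube.internal"))
	}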
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
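The "copy: skipping ... (exists)" lines above mean the cached image tarballs did not need to be re-pushed to the VM; the preceding `stat -c "%s %y"` calls compare the remote file against the local cache copy. A hedged sketch of that decision (the exact comparison minikube makes may differ, and the values in main are placeholders):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// alreadyTransferred reports whether the local cached tarball matches the
	// remote file's size and mtime, in which respect the copy can be skipped.
	func alreadyTransferred(localPath string, remoteSize int64, remoteMtime time.Time) (bool, error) {
		fi, err := os.Stat(localPath)
		if err != nil {
			return false, err
		}
		return fi.Size() == remoteSize && fi.ModTime().Equal(remoteMtime), nil
	}

	func main() {
		ok, err := alreadyTransferred("/tmp/coredns_v1.11.1", 18182400, time.Now()) // placeholder values
		fmt.Println(ok, err)
	}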
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
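	The kubeadm, kubelet, and kube-proxy documents above are rendered from the options struct logged at kubeadm.go:181. A pared-down illustration of generating such a document with Go's text/template; the template text and field names here are assumptions for the sketch, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// clusterTmpl is a tiny subset of a kubeadm ClusterConfiguration, kept only to
	// show the rendering mechanism used for configs like the one above.
	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type clusterOpts struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		t := template.Must(template.New("cfg").Parse(clusterTmpl))
		_ = t.Execute(os.Stdout, clusterOpts{
			KubernetesVersion:    "v1.31.0-beta.0",
			ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		})
	}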
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
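The repeated `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate will remain valid for at least the next 24 hours (86400 seconds) before the cluster restart is attempted. A minimal sketch of an equivalent check in Go, assuming a PEM-encoded certificate path is passed on the command line (the path and the 24-hour window here are illustrative, not minikube's actual code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: certcheck <cert.pem>")
		os.Exit(2)
	}
	// Read a PEM-encoded certificate, e.g. /var/lib/minikube/certs/etcd/server.crt.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	// Mirror `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24 hours")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24 more hours")
}
```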
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
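The healthz probes above show the usual restart progression: connection refused while the apiserver is starting, 403 while the anonymous probe is rejected, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still completing, and finally 200 "ok". A minimal polling sketch in Go under similar assumptions (the endpoint URL matches the node IP in this log and TLS verification is skipped purely for illustration; minikube itself trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Non-200 responses (403, 500) are reported and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected while bootstrap roles are still being created.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.227:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```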
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
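Each of the pod_ready waits above inspects the pod's Ready condition and skips pods whose node is not yet "Ready", which is why every system-critical pod is initially reported as not ready here. A hypothetical helper showing the condition being polled, assuming k8s.io/api is available on the module path (this is an illustrative sketch, not minikube's implementation):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True, which is the
// condition the "waiting for pod ... to be Ready" steps above are checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}
```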
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
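The cycle above repeats a pattern minikube drives over SSH: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "0 containers". The sketch below is a hedged, standalone approximation of that pattern (run locally rather than through minikube's ssh_runner, and not the actual cri.go implementation); the `listContainers` helper name is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line. Empty output means nothing matched,
// which is what the "0 containers: []" lines in the log above report.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```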
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
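The interleaved pod_ready.go lines come from separate test processes polling the metrics-server pod's Ready condition until a timeout. Below is a hedged client-go sketch of that kind of poll; it is not minikube's pod_ready.go, and the kubeconfig path, label selector, and 4-minute deadline are assumptions taken from the surrounding log rather than from minikube's source.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, mirroring the
// `has status "Ready":"False"` messages in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// The kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					fmt.Printf("pod %q not Ready yet\n", pods.Items[i].Name)
				}
			}
			if allReady {
				fmt.Println("metrics-server is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for metrics-server to be Ready")
}
```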
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
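Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the container listings finding no kube-apiserver at all. A quick way to confirm the apiserver is unreachable without going through kubectl is to probe its /healthz endpoint directly; the sketch below is only a diagnostic illustration (port 8443 is taken from the log, and the insecure TLS setting is there solely because the test apiserver serves a self-signed certificate).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification only because this is a one-off probe
	// against a local test cluster with a self-signed apiserver cert.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no kube-apiserver container running, a connection-refused
		// error is the expected outcome, matching the log above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz status:", resp.Status)
}
```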
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
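Once the 4-minute wait expires, the run gives up on restarting the existing control plane and falls back to a forced `kubeadm reset` before re-provisioning, as the "Unable to restart control-plane node(s), will reset cluster" lines show. A minimal sketch of issuing that same reset from Go is below; it runs locally via os/exec rather than through minikube's ssh_runner, and the PATH prefix and CRI-O socket are copied from the log line above.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command in the log: prepend the bundled kubeadm binary
	// directory to PATH, then force a reset against the CRI-O socket.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm reset failed:", err)
	}
}
```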
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
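The "connection refused" errors above are expected at this point in the run: the control plane is being restarted and, as the container listings above show (0 containers matching "kube-apiserver"), no API server is running yet, so any kubectl call against localhost:8443 fails. A quick manual check on the node would look like this (illustrative only, not commands the test ran):

    sudo crictl ps -a --name kube-apiserver     # no apiserver container yet
    curl -sk https://localhost:8443/healthz     # refused until the apiserver is back up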
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
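The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so that the following kubeadm init can regenerate it. A minimal sketch of the same pattern (illustrative, not the actual minikube source):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: drop it
      fi
    done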
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
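The kubelet-check and api-check phases above poll local health endpoints until they report ready. Equivalent manual checks on the node would be (assumed default kubelet healthz port; illustrative only):

    curl -sf http://127.0.0.1:10248/healthz      # kubelet healthz (default port 10248)
    curl -sk https://127.0.0.1:8443/healthz      # kube-apiserver healthz on this cluster's port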
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
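kubeadm init completed successfully for embed-certs-486436, and the boilerplate above already lists the follow-up steps. On the node, the new control plane could be verified with the generated admin kubeconfig (not executed by the test, shown only for orientation):

    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes
    kubectl -n kube-system get pods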
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
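The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to by "Configuring bridge CNI" above. Its exact contents are not reproduced in the log; a minimal bridge conflist of the same general shape looks like the sketch below (all field values are assumptions for illustration, not the file minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF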
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
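The repeated "kubectl get sa default" calls above are a readiness poll: the default service account only exists once kube-controller-manager's service-account controller is running, so minikube retries roughly every 500ms until it appears (12.8s in this run). The same wait expressed as a plain shell loop (illustrative only):

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done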
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
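The toEnable map above shows which addons this profile will turn on: metrics-server, storage-provisioner and default-storageclass are true, everything else stays off. The equivalent user-facing commands would be (illustrative; the test drives this through the Go API rather than the CLI):

    minikube -p embed-certs-486436 addons enable metrics-server
    minikube -p embed-certs-486436 addons list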
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
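	For reference, the log-gathering pass above reduces to a fixed set of shell commands run over SSH on the node. A minimal sketch of the same collection, runnable by hand after `minikube ssh` (the container ID is one example copied from this run and is not stable across runs):

	    # per-container logs via the CRI client (example ID taken from the run above)
	    sudo crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3
	    # runtime and kubelet service logs
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    # kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node state, using the kubeconfig minikube maintains on the VM
	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig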
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
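	The readiness sequence logged above (node Ready, system-critical pods Ready, apiserver /healthz returning "ok") can be spot-checked by hand. A rough sketch, assuming the kubeconfig context matches the profile name as minikube configures it and that anonymous access to /healthz is left at the Kubernetes default:

	    # the health endpoint probed above; expect the literal body "ok"
	    curl -ks https://192.168.39.200:8443/healthz; echo
	    # the same system-critical pods the wait loop inspects
	    kubectl --context embed-certs-486436 -n kube-system get pods
	    kubectl --context embed-certs-486436 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s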
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
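	The remaining wait loop below belongs to a third profile in this run (process 64842); its metrics-server pod never reaches Ready and the wait eventually fails with "context deadline exceeded". A hedged sketch for inspecting that pod by hand, where the pod name is taken from the log and the k8s-app=metrics-server label is the addon's usual selector and is an assumption here:

	    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide    # label is assumed
	    kubectl -n kube-system describe pod metrics-server-78fcd8795b-dsfmg  # events show why it stays unready
	    kubectl -n kube-system logs deploy/metrics-server                    # container logs, if the pod ever started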
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
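With the profile reported as started, a quick sanity check against the same cluster might look like the following sketch; it assumes the kubectl context name matches the profile name, which is what the "Done!" line above configures:

    kubectl --context no-preload-543029 get nodes
    kubectl --context no-preload-543029 -n kube-system get pods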
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
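The troubleshooting hints kubeadm prints above amount to a handful of commands run on the node. A consolidated sketch, using the same cri-o socket path shown in the log (CONTAINERID is a placeholder for whichever container turns out to be failing):

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID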
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
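The grep/rm sequence at 15:26:55 above removes any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint. An equivalent standalone loop, shown only as a sketch of the behaviour and not minikube's actual implementation:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done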
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
	
	
	==> CRI-O <==
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.896769056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748534896744345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=896545d8-2b38-467b-8db7-9188598e6b97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.897490575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=346f96bb-0a2c-4638-b6b3-4f51fee6bfa5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.897545454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=346f96bb-0a2c-4638-b6b3-4f51fee6bfa5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.897579001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=346f96bb-0a2c-4638-b6b3-4f51fee6bfa5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.930777068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e094520-9bd3-4665-ab44-7c9c50962b36 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.930852400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e094520-9bd3-4665-ab44-7c9c50962b36 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.932102009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=783a2bb4-9926-4ed3-81ef-795f1a3f6f96 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.932577016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748534932550915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=783a2bb4-9926-4ed3-81ef-795f1a3f6f96 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.933160057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e29aa7ac-d36b-441c-89ca-151e35e2ca76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.933208841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e29aa7ac-d36b-441c-89ca-151e35e2ca76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.933238285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e29aa7ac-d36b-441c-89ca-151e35e2ca76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.966072981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd2f2736-f3a6-4d17-8805-96f028d09f6d name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.966149431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd2f2736-f3a6-4d17-8805-96f028d09f6d name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.967273584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a32fc2e0-8006-4f84-be14-25b9b8de2bfd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.967679407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748534967657469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a32fc2e0-8006-4f84-be14-25b9b8de2bfd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.968262239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4942f788-ef98-4a76-bdb3-508358aa1e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.968318153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4942f788-ef98-4a76-bdb3-508358aa1e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.968346745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4942f788-ef98-4a76-bdb3-508358aa1e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.999622639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0dc5d151-6a95-4752-a240-460cba66fd2e name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:54 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:54.999736664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0dc5d151-6a95-4752-a240-460cba66fd2e name=/runtime.v1.RuntimeService/Version
	Jul 23 15:28:55 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:55.000916687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce136192-d36c-42c5-ac9e-0d027794430c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:55 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:55.001294112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748535001274255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce136192-d36c-42c5-ac9e-0d027794430c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:28:55 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:55.001789304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be9844ec-c50c-477a-9272-54a92869e9db name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:55 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:55.001850714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be9844ec-c50c-477a-9272-54a92869e9db name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:28:55 old-k8s-version-000272 crio[653]: time="2024-07-23 15:28:55.001884901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=be9844ec-c50c-477a-9272-54a92869e9db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul23 15:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051105] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.906859] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.937543] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.495630] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.117641] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061578] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.222393] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.111093] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.239582] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.000298] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.060522] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958927] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul23 15:21] kauditd_printk_skb: 46 callbacks suppressed
	[Jul23 15:24] systemd-fstab-generator[5081]: Ignoring "noauto" option for root device
	[Jul23 15:26] systemd-fstab-generator[5360]: Ignoring "noauto" option for root device
	[  +0.066445] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:28:55 up 8 min,  0 users,  load average: 0.00, 0.07, 0.05
	Linux old-k8s-version-000272 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002601c0, 0xc000d1fb60, 0x1, 0x0, 0x0)
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00019afc0)
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: goroutine 147 [select]:
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0000506e0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000cf5ec0, 0x0, 0x0)
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00019afc0)
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 23 15:28:52 old-k8s-version-000272 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 23 15:28:52 old-k8s-version-000272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 23 15:28:52 old-k8s-version-000272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 23 15:28:52 old-k8s-version-000272 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 23 15:28:52 old-k8s-version-000272 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5587]: I0723 15:28:52.882766    5587 server.go:416] Version: v1.20.0
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5587]: I0723 15:28:52.883138    5587 server.go:837] Client rotation is on, will bootstrap in background
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5587]: I0723 15:28:52.885248    5587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5587]: W0723 15:28:52.886231    5587 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 23 15:28:52 old-k8s-version-000272 kubelet[5587]: I0723 15:28:52.886373    5587 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (218.068112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-000272" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (749.77s)
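The W-level suggestion captured in the log above points at the kubelet cgroup driver. A rerun of this profile with that flag would look roughly like the line below; treat it as a sketch only, since this run did not verify that the flag clears the v1.20.0 kubelet crash loop on this image (the other flags are taken from the Audit table for old-k8s-version-000272):
	out/minikube-linux-amd64 start -p old-k8s-version-000272 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd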

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217: exit status 3 (3.167454251s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:18:32.730764   66515 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host
	E0723 15:18:32.730791   66515 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-911217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-911217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152652233s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-911217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217: exit status 3 (3.063250933s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:18:41.946841   66595 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host
	E0723 15:18:41.946866   66595 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.64:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911217" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
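Both status probes above fail with "no route to host" against 192.168.61.64:22, so the addon enable never reaches the stopped VM. Under the kvm2 driver a manual follow-up would look roughly like the lines below; this is a sketch that assumes libvirt access on the agent (the libvirt domain name matching the profile name is inferred from the embed-certs log further down), not something this run performed:
	sudo virsh domstate default-k8s-diff-port-911217
	out/minikube-linux-amd64 status -p default-k8s-diff-port-911217 --alsologtostderr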

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (545.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-486436 -n embed-certs-486436
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:34:41.526646513 +0000 UTC m=+5890.732391238
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-486436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-486436 logs -n 25: (2.409373766s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
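
The WaitForSSH step above shells out to the system ssh client with the options printed at 15:20:20.369485 and runs `exit 0` to confirm the guest accepts logins. A minimal sketch of building that external probe with os/exec, reusing the key path and address from the log; this is illustrative, not the libmachine implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs `ssh ... docker@host "exit 0"` with the non-interactive
    // options seen in the log, returning nil once the command exits 0.
    func probeSSH(host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := probeSSH("192.168.39.200",
            "/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa")
        fmt.Println("reachable:", err == nil)
    }
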
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
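
The guarded shell above only touches /etc/hosts when the new hostname is missing: it rewrites an existing 127.0.1.1 line or appends one, so re-provisioning stays idempotent. A hypothetical Go helper that assembles the same guarded command for an arbitrary hostname (illustrative only):

    package main

    import "fmt"

    // hostsUpdateCommand returns a shell command that adds or rewrites the
    // 127.0.1.1 entry for name only when /etc/hosts does not already contain
    // it, matching the guarded sed/tee logic shown in the log.
    func hostsUpdateCommand(name string) string {
        return fmt.Sprintf(
            `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostsUpdateCommand("embed-certs-486436"))
    }
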
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
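
The guest clock check above compares the timestamp read over SSH (1721748021.563549452) with the host's reference time and accepts the 83ms drift as within tolerance. A small sketch of that comparison in Go; the one-second tolerance here is an assumption, not minikube's documented threshold:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is within tolerance of the
    // host clock, as in the "guest clock delta is within tolerance" log line.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Guest timestamp taken from the log; the host time is shifted by the
        // logged delta of 83.114427ms for the sake of the example.
        guest := time.Unix(1721748021, 563549452)
        host := guest.Add(-83114427 * time.Nanosecond)
        delta, ok := clockDeltaOK(guest, host, time.Second) // tolerance assumed
        fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }
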
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
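
The netfilter prep above is a check-then-fallback: reading net.bridge.bridge-nf-call-iptables fails because the proc entry does not exist yet, br_netfilter is loaded, and then IPv4 forwarding is switched on. A hypothetical Go sketch of that flow; runCmd stands in for minikube's SSH runner and is invented for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCmd is a stand-in for the ssh_runner seen in the log: it executes a
    // shell command and returns its combined output and error.
    func runCmd(cmd string) (string, error) {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    // ensureBridgeNetfilter mirrors the log: verify the sysctl, fall back to
    // loading br_netfilter if the proc entry is absent, then enable ip_forward.
    func ensureBridgeNetfilter() error {
        if _, err := runCmd("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // A missing /proc/sys/net/bridge/... entry is expected before the
            // module loads, so this failure is treated as non-fatal.
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            if _, err := runCmd("sudo modprobe br_netfilter"); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v", err)
            }
        }
        _, err := runCmd(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
        return err
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }
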
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
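
Lines 15:20:23.117 through 15:20:26.682 show the preload path: crictl finds the expected images missing, stat of /preloaded.tar.lz4 fails, the tarball is copied over SSH, extracted into /var, removed, and a second crictl pass confirms the images are present. A rough Go sketch of that decision flow; the helper names (run, copyTarball, ensurePreload) are invented:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command on the guest; here it simply shells out locally as
    // a stand-in for minikube's SSH runner.
    func run(cmd string) error {
        return exec.Command("sh", "-c", cmd).Run()
    }

    // ensurePreload pushes and extracts the preloaded image tarball only when
    // it is not already present on the guest, mirroring the stat / scp / tar /
    // rm sequence in the log. copyTarball is assumed to transfer the file.
    func ensurePreload(tarball string, copyTarball func(dst string) error) error {
        if err := run("stat " + tarball); err != nil {
            // Not there yet: push it from the host cache.
            if err := copyTarball(tarball); err != nil {
                return fmt.Errorf("copy preload: %v", err)
            }
        }
        if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
            return fmt.Errorf("extract preload: %v", err)
        }
        return run("rm -f " + tarball)
    }

    func main() {
        err := ensurePreload("/preloaded.tar.lz4", func(dst string) error {
            fmt.Println("would scp preload tarball to", dst)
            return nil
        })
        fmt.Println("preload done, err:", err)
    }
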
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
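The four sed invocations above pin the pause image, switch the cgroup manager, and reset conmon_cgroup in CRI-O's drop-in config. Assuming they all succeed, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should afterwards read roughly as follows (an illustrative reconstruction from the sed expressions in this log, not output captured from the VM):

	$ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"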
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
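The one-liner above strips any stale host.minikube.internal entry and appends the gateway mapping in a single copy of /etc/hosts. If it succeeds, the guest gains a line like this (expected result inferred from the command shown, not captured from the VM):

	$ grep host.minikube.internal /etc/hosts
	192.168.50.1	host.minikube.internal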
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
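The same pattern repeats for each CA bundle: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and that hash plus a ".0" suffix becomes the symlink name under /etc/ssl/certs. For the test-suite certificate, the link created above implies the following (illustrative; the hash and paths are taken from the ln command in this log):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	3ec20f2e
	$ readlink /etc/ssl/certs/3ec20f2e.0
	/etc/ssl/certs/185032.pem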
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
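Note: the block of identical pgrep runs above is a fixed-interval poll for the kube-apiserver process. A small sketch of the same idea, assuming local sudo access instead of minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process is running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the apiserver process")
    }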
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
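Note: the preload decision above hinges on whether "crictl images --output json" already lists the expected control-plane images; if not, the tarball is copied and extracted, then the check is re-run. A sketch of that check; the JSON field names follow crictl's documented output shape but should be treated as an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json`:
    // a top-level "images" array whose entries carry "repoTags".
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatalf("crictl images failed: %v", err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		log.Fatalf("unexpected crictl output: %v", err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.30.3"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded images present, skipping tarball extraction")
    				return
    			}
    		}
    	}
    	fmt.Println("couldn't find", want, "- would copy and extract preloaded.tar.lz4")
    }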
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
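Note: the kubelet drop-in shown above is rendered from the node's settings (version, hostname override, node IP). A minimal text/template sketch that produces a similar ExecStart line; the field names are illustrative, not minikube's internal config types:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// In minikube this content lands at
    	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    	t := template.Must(template.New("kubelet").Parse(unit))
    	err := t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.30.3",
    		"NodeName":          "default-k8s-diff-port-911217",
    		"NodeIP":            "192.168.61.64",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }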
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
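Note: the bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the current mapping. The same idempotent rewrite expressed in Go, run as root against the hosts file directly (a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const path = "/etc/hosts"
    	const host = "control-plane.minikube.internal"
    	const entry = "192.168.61.64\t" + host

    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous mapping for the control-plane name,
    		// matching the `grep -v $'\t<host>$'` in the log above.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ensured", entry)
    }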
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
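Note: each "openssl x509 -noout -checkend 86400" run above asks whether a certificate expires within the next 24 hours, which decides whether the control-plane certs are regenerated. The equivalent check in Go's standard library, pointed at one of the same files:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Same question as `openssl x509 -noout -checkend 86400`:
    	// does this certificate remain valid for at least 24 more hours?
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM data found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h; it would be regenerated")
    	} else {
    		fmt.Println("certificate is valid past the next 24h")
    	}
    }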
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
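Note: the healthz wait that follows polls https://192.168.61.64:8444/healthz, tolerating the 403 and 500 responses the apiserver returns while RBAC bootstrap roles are still being created, until it answers 200 "ok". A compact version of that loop, skipping TLS verification the way a bootstrap probe typically must:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver's serving cert isn't trusted by the host yet,
    		// so this bootstrap probe skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.64:8444/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for a healthy apiserver")
    }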
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
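Note: the 1-k8s.conflist written above configures the bridge CNI plugin for the 10.244.0.0/16 pod CIDR chosen earlier. The file's exact contents are not shown in the log; the sketch below writes a generic bridge conflist of the same shape and should be read as illustrative, not minikube's literal config:

    package main

    import (
    	"log"
    	"os"
    )

    // A generic bridge CNI configuration; field names follow the CNI spec,
    // but the values are illustrative rather than copied from minikube.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		log.Fatal(err)
    	}
    }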
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
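The "needs transfer" decisions that follow come down to asking podman for the locally stored image ID and comparing it with the hash minikube expects from its cache; a mismatch or missing image triggers a crictl rmi plus a podman load of the cached tarball. A rough Go sketch of that check using os/exec (the function name and the reload step are illustrative assumptions, not minikube's actual cache_images API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether img is absent from the runtime's
// store or stored under a different ID than the cached copy expects.
func imageNeedsTransfer(img, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // image not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/etcd:3.5.14-0"
	// Hash taken from the log below; in practice it comes from the cache.
	wantID := "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa"
	if imageNeedsTransfer(img, wantID) {
		// Remove any stale copy, then load the cached tarball from the VM.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		_ = exec.Command("sudo", "podman", "load", "-i",
			"/var/lib/minikube/images/etcd_3.5.14-0").Run()
		fmt.Println("reloaded", img)
	}
}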
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
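The openssl x509 -checkend 86400 invocations above simply ask whether each certificate is still valid 24 hours from now. The equivalent check in Go, for anyone reproducing it outside the VM (the relative path is an example; on the node the certs live under /var/lib/minikube/certs as shown above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the
	// certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}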
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
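The healthz wait that starts here simply retries the endpoint until it stops failing; in the log below it moves from connection refused to 403 (anonymous access) to 500 (post-start hooks still settling). A compact sketch of such a poll loop, with the insecure TLS probe, retry count, and interval assumed for illustration rather than taken from minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikubeCA, which this toy
		// client does not trust, so skip verification for the probe.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.72.227:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}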
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
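The probe loop above first gets 403 because the request is anonymous and the rbac/bootstrap-roles post-start hook has not yet created the role that exposes /healthz, then 500 while that hook and scheduling/bootstrap-system-priority-classes finish, and finally 200. A rough manual equivalent against the same endpoint (curl and the ?verbose flag are an illustration here, not what minikube itself runs):

    curl -k 'https://192.168.72.227:8443/healthz?verbose'
    # or authenticated, using the kubeconfig already present on the node:
    sudo kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw '/healthz?verbose'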
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
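The two ssh_runner lines above create /etc/cni/net.d and write a 496-byte bridge conflist from memory. The file's contents are not captured in the log; a typical bridge-plus-portmap conflist of the kind this step produces looks roughly like the sketch below (the subnet and field values are assumptions for illustration, not the exact file minikube writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF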
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
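Every pod in the wait loop above is skipped because the node itself still reports Ready=False, so the per-pod checks cannot pass yet. Roughly equivalent one-off checks from outside the test, using the profile name as the kubeconfig context (illustrative only):

    kubectl --context no-preload-543029 get node no-preload-543029 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --context no-preload-543029 -n kube-system get pods \
      -l 'k8s-app in (kube-dns, kube-proxy)'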
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
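Each addon manifest is copied over a fresh SSH session to the VM; the IP, port, key path, and user are all in the sshutil lines above. A hand-run equivalent would be:

    ssh -p 22 \
      -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa \
      docker@192.168.72.227
    # or simply:
    minikube -p no-preload-543029 ssh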
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
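The three addons enabled here can also be toggled from the CLI against the same profile; a hedged equivalent of what the restart path just did programmatically:

    minikube -p no-preload-543029 addons enable storage-provisioner
    minikube -p no-preload-543029 addons enable metrics-server
    minikube -p no-preload-543029 addons enable default-storageclass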
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
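Each diagnostics pass above collects the same sources: the kubelet journal, dmesg, "kubectl describe nodes" (failing while the apiserver is down), the CRI-O journal, and container status. To reproduce one pass by hand (the profile name for this process is not shown in this part of the log, so <profile> is a placeholder):

    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
    minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
    minikube -p <profile> ssh -- sudo crictl ps -a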
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
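The metrics-server pods in these interleaved profiles keep reporting Ready=False; at least for this profile the addon was configured above with the image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line), which cannot be pulled, so the pod stays unready for the duration of the run. A quick external confirmation (illustrative, not part of the test):

    kubectl --context no-preload-543029 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context no-preload-543029 -n kube-system describe pod -l k8s-app=metrics-server | grep -A5 Events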
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
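	The repeated blocks above are minikube's diagnostic pass while the control plane never comes up: each cycle probes the CRI for every expected control-plane container and, finding none, falls back to node-level logs (kubelet, dmesg, CRI-O, container status). The following is a minimal, self-contained sketch of that pattern; runSSH is a hypothetical stand-in for minikube's ssh_runner and simply runs the command locally, so this illustrates the loop rather than reproducing minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runSSH is a stand-in for minikube's ssh_runner (hypothetical helper):
	// here it just runs the command locally so the sketch is self-contained.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Components the log-gathering loop probes via `crictl ps`.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			ids, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
			if err != nil || ids == "" {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("found %s containers: %s\n", name, ids)
		}
		// With no containers found, fall back to node-level log sources.
		for _, src := range []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
		} {
			if out, err := runSSH(src); err == nil {
				_ = out // collected for the report
			}
		}
	}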
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
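	Before re-running kubeadm init, minikube checks whether each kubeconfig under /etc/kubernetes still references the expected control-plane endpoint and removes the file when the grep fails (exit status 2 here simply means the file does not exist). A rough Go sketch of that cleanup step follows; the endpoint constant and the direct exec calls are assumptions for illustration rather than minikube's real ssh_runner plumbing.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Endpoint the configs are expected to reference (assumed value).
		const endpoint = "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, cfg := range configs {
			// grep exits non-zero when the pattern (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, cfg).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, cfg)
				// Remove the stale (or absent) config so kubeadm regenerates it.
				if rmErr := exec.Command("sudo", "rm", "-f", cfg).Run(); rmErr != nil {
					fmt.Fprintln(os.Stderr, "remove failed:", rmErr)
				}
			}
		}
	}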
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
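For reference, the oom_adj check logged at ops.go:34 amounts to reading /proc/<pid>/oom_adj for the apiserver process, exactly what the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command above does. A minimal local sketch of the same probe (illustrative only, not minikube's code; assumes pgrep is on PATH and kube-apiserver runs on the same host):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, mirroring the pgrep call in the log.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// Read the kernel's OOM adjustment for that PID.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("could not read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj))) // the run above observed -16
}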
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
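The addon rollout above is just kubectl apply against the staged manifests with the in-VM kubeconfig, executed over SSH by ssh_runner. A local approximation of that one command (a sketch under those assumptions, not the ssh_runner code; plain `kubectl` stands in for the pinned /var/lib/minikube/binaries/v1.30.3/kubectl):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shape as the logged command: sudo KUBECONFIG=... kubectl apply -f <staged addon manifests>.
	out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", "kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}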
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
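The healthz wait recorded just above follows a simple protocol: probe the apiserver's /healthz endpoint until it answers HTTP 200 with the body "ok", then read the control-plane version and move on. A minimal Go sketch of that probe is shown below; the address is copied from the log, while the 2-minute budget and the InsecureSkipVerify transport are illustrative assumptions rather than minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok",
// or gives up after timeout. This mirrors the healthz wait seen in the
// log above, but only as an illustrative sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal certificate; a real
			// client would trust the cluster CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// Address taken from the log; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.39.200:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("healthz returned 200: ok")
}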
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
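The diagnostics pass above repeats one recipe per component: sudo crictl ps -a --quiet --name=<component> to find container IDs, then crictl logs --tail 400 <id> for each, with journalctl covering kubelet and CRI-O. The Go sketch below reproduces that recipe on a local node with os/exec; it assumes crictl is on PATH and that sudo needs no password, and it is not minikube's ssh_runner-based implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs follows the recipe visible in the log above:
// list container IDs for a component with crictl, then tail each one's logs.
// It assumes crictl is installed locally and sudo requires no password.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s: %w", id, err)
		}
		fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}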
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
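The readiness waits above all share one shape: poll a pod's Ready condition at a fixed interval until it turns true or a deadline expires, in which case the "context deadline exceeded" error seen for metrics-server-78fcd8795b-dsfmg is reported and startup carries on. Below is a self-contained Go sketch of that poll-with-deadline pattern; the condition function is a placeholder, not minikube's pod_ready check, and the short deadline is only for demonstration.

package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor polls cond every interval until it reports true or ctx expires.
// The log's "context deadline exceeded" corresponds to the ctx.Err() branch.
func waitFor(ctx context.Context, interval time.Duration, cond func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Short deadline for demonstration; the test above waits 4m-6m per pod.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitFor(ctx, 500*time.Millisecond, func() (bool, error) {
		// Placeholder: a real caller would query the pod's Ready condition
		// through the Kubernetes API instead of returning false forever.
		return false, nil
	})
	fmt.Println(err) // prints: waitPodCondition: context deadline exceeded
}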
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
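	The run above ends with the kubelet never answering its health check on 127.0.0.1:10248, so kubeadm times out in the wait-control-plane phase and minikube aborts with K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch of the diagnostics and the retry suggested in the log is shown below; the profile name is a placeholder, and the cgroup-driver override is the workaround referenced for minikube issue 4172, not a confirmed fix for this failure.
	
	# Inspect the kubelet inside the minikube VM (profile name is hypothetical)
	minikube ssh -p <profile> "sudo systemctl status kubelet"
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"
	
	# List any control-plane containers CRI-O actually started
	minikube ssh -p <profile> "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	
	# Retry with the kubelet forced onto the systemd cgroup driver, as the log suggests
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd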
	
	
	==> CRI-O <==
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.352025218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883352004456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1d681d6-b900-4220-a7e5-66d25683ac82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.352632610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7f98cab-713f-4136-95f3-e4cce9366a3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.352705138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7f98cab-713f-4136-95f3-e4cce9366a3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.352907696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7f98cab-713f-4136-95f3-e4cce9366a3d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.396621053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ed27a9a-2548-461d-b6bc-18d7cd0142cc name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.396710149Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ed27a9a-2548-461d-b6bc-18d7cd0142cc name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.397759927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=018a2997-3463-4192-a034-b7826c0c3150 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.398178943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883398148937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=018a2997-3463-4192-a034-b7826c0c3150 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.398757044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b958bc5e-3057-4d19-8a66-f704d0744ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.398829030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b958bc5e-3057-4d19-8a66-f704d0744ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.399016994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b958bc5e-3057-4d19-8a66-f704d0744ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.440522042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8f28f83-6060-42d3-bc6d-ffa2e56a8462 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.440608271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8f28f83-6060-42d3-bc6d-ffa2e56a8462 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.441750609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d741e451-c327-4675-846b-703e3de2290a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.442151808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883442131126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d741e451-c327-4675-846b-703e3de2290a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.442647604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f334524d-a4bc-4284-858d-3cec0532d003 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.442721292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f334524d-a4bc-4284-858d-3cec0532d003 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.442961512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f334524d-a4bc-4284-858d-3cec0532d003 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.476429988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa9b659e-3e4a-46b9-9eda-ec1f1efa8be7 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.476521858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa9b659e-3e4a-46b9-9eda-ec1f1efa8be7 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.477615108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df341b56-6851-4361-9a96-ab758b318b35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.478145564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883478122865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df341b56-6851-4361-9a96-ab758b318b35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.478707329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=401755ec-4ff2-46ca-be66-9924d8af9596 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.478859003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=401755ec-4ff2-46ca-be66-9924d8af9596 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 embed-certs-486436 crio[727]: time="2024-07-23 15:34:43.479077628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=401755ec-4ff2-46ca-be66-9924d8af9596 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2df1371fcdf71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   870b02d3c5612       storage-provisioner
	6875dba4151da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   64a2208f90f3e       coredns-7db6d8ff4d-hnlc7
	58c6510eb089a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0ac3f1de36b46       coredns-7db6d8ff4d-lj5xg
	f1f141eaef2ba       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   fdd86191b356f       kube-proxy-wzh4d
	ff5f662fc4f45       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   27204d27f928e       etcd-embed-certs-486436
	d96dc2ceb2625       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   9bfe35d814868       kube-scheduler-embed-certs-486436
	57cec121ce037       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   9a913748a4b9f       kube-apiserver-embed-certs-486436
	b7c481c754ef1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   c8cf85132d12f       kube-controller-manager-embed-certs-486436
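	The listing above can be reproduced directly against the node's container runtime; a minimal sketch, assuming the profile name embed-certs-486436 and that crictl is present in the VM:
	  minikube ssh -p embed-certs-486436 -- sudo crictl ps -a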
	
	
	==> coredns [58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
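	The CoreDNS excerpts above only show the startup banner; a hedged way to pull the full logs for either replica (pod names taken from the container listing above):
	  kubectl --context embed-certs-486436 -n kube-system logs coredns-7db6d8ff4d-hnlc7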
	
	
	==> describe nodes <==
	Name:               embed-certs-486436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-486436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=embed-certs-486436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-486436
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:30:49 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:30:49 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:30:49 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:30:49 +0000   Tue, 23 Jul 2024 15:25:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    embed-certs-486436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14762c4ab825492d956123b475a79cfa
	  System UUID:                14762c4a-b825-492d-9561-23b475a79cfa
	  Boot ID:                    670dbae9-a5f4-4314-956d-1b105e1f2510
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hnlc7                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-lj5xg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-486436                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-486436              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-486436     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-wzh4d                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-486436              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-7l2jw                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-486436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-486436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-486436 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-486436 event: Registered Node embed-certs-486436 in Controller
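	The node description above is the standard describe output; a sketch of how to regenerate it, assuming the same kubeconfig context used by the test:
	  kubectl --context embed-certs-486436 describe node embed-certs-486436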
	
	
	==> dmesg <==
	[  +0.050392] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036089] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.692943] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.901446] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.511879] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.838121] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055462] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064265] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.169480] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.146275] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.297435] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.147752] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +1.899642] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.060656] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.538626] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.584489] kauditd_printk_skb: 79 callbacks suppressed
	[Jul23 15:25] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.733194] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +4.438462] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.635351] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[ +13.890636] systemd-fstab-generator[4098]: Ignoring "noauto" option for root device
	[  +0.099490] kauditd_printk_skb: 14 callbacks suppressed
	[Jul23 15:26] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d] <==
	{"level":"info","ts":"2024-07-23T15:25:18.091711Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","added-peer-id":"fe8c4457455e3a5","added-peer-peer-urls":["https://192.168.39.200:2380"]}
	{"level":"info","ts":"2024-07-23T15:25:18.128596Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:25:18.134515Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:25:18.134571Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:25:18.129021Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-07-23T15:25:18.13461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-07-23T15:25:18.643399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-23T15:25:18.643456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-23T15:25:18.643504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 1"}
	{"level":"info","ts":"2024-07-23T15:25:18.643517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:25:18.643522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-07-23T15:25:18.64353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 2"}
	{"level":"info","ts":"2024-07-23T15:25:18.643537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-07-23T15:25:18.64762Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:embed-certs-486436 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:25:18.64777Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:25:18.648413Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:25:18.658202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-07-23T15:25:18.663369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:25:18.66363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:25:18.66366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:25:18.665127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:25:18.692215Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:25:18.694792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:25:18.712384Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	2024/07/23 15:25:22 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
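	The raft messages above show the single member fe8c4457455e3a5 electing itself leader at term 2. A hedged membership check from inside the VM, reusing the certificate paths etcd prints above and assuming an etcdctl binary is available (for example inside the etcd container):
	  sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key member list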
	
	
	==> kernel <==
	 15:34:43 up 14 min,  0 users,  load average: 0.01, 0.07, 0.08
	Linux embed-certs-486436 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18] <==
	I0723 15:28:39.057001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:30:20.315890       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:30:20.316010       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0723 15:30:21.316835       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:30:21.316897       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:30:21.316910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:30:21.317018       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:30:21.317085       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:30:21.318305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:31:21.317950       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:31:21.318008       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:31:21.318017       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:31:21.319277       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:31:21.319390       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:31:21.319401       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:33:21.318445       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:33:21.318533       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:33:21.318544       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:33:21.320575       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:33:21.320684       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:33:21.320710       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
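	The repeated 503 responses above indicate the aggregated v1beta1.metrics.k8s.io APIService never became available to the apiserver. A hedged way to inspect its reported condition:
	  kubectl --context embed-certs-486436 get apiservice v1beta1.metrics.k8s.io -o yaml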
	
	
	==> kube-controller-manager [b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa] <==
	I0723 15:29:06.272267       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:29:35.829716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:29:36.279789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:05.836682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:30:06.288134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:35.842493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:30:36.297121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:31:05.847788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:31:06.304665       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:31:30.980576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="365.419µs"
	E0723 15:31:35.853678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:31:36.312543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:31:42.977249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="126.836µs"
	E0723 15:32:05.858739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:32:06.320981       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:32:35.864372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:32:36.328223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:33:05.869108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:33:06.337757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:33:35.875373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:33:36.346778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:34:05.880529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:34:06.355507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:34:35.885703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:34:36.363078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd] <==
	I0723 15:25:37.498824       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:25:37.516043       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0723 15:25:37.620717       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 15:25:37.620752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:25:37.620768       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:25:37.638743       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:25:37.638965       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:25:37.638985       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:25:37.650101       1 config.go:192] "Starting service config controller"
	I0723 15:25:37.650131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:25:37.650159       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:25:37.650162       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:25:37.650179       1 config.go:319] "Starting node config controller"
	I0723 15:25:37.650182       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:25:37.750522       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:25:37.750651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:25:37.750450       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047] <==
	W0723 15:25:20.368870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 15:25:20.370412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 15:25:20.372434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:20.372524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.174323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:25:21.174391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:25:21.177404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 15:25:21.177429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 15:25:21.203853       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 15:25:21.203970       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:25:21.241613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 15:25:21.242017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 15:25:21.316484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 15:25:21.316537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0723 15:25:21.390277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:21.390383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.396321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:21.396423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.476967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:25:21.477065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:25:21.526210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 15:25:21.526266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 15:25:21.567323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 15:25:21.567395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0723 15:25:24.049595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:32:22 embed-certs-486436 kubelet[3910]: E0723 15:32:22.974579    3910 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:32:22 embed-certs-486436 kubelet[3910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:32:22 embed-certs-486436 kubelet[3910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:32:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:32:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:32:33 embed-certs-486436 kubelet[3910]: E0723 15:32:33.959606    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:32:47 embed-certs-486436 kubelet[3910]: E0723 15:32:47.959659    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:32:59 embed-certs-486436 kubelet[3910]: E0723 15:32:59.959817    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:33:10 embed-certs-486436 kubelet[3910]: E0723 15:33:10.961724    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]: E0723 15:33:22.960507    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]: E0723 15:33:22.976401    3910 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:33:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:33:37 embed-certs-486436 kubelet[3910]: E0723 15:33:37.959919    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:33:49 embed-certs-486436 kubelet[3910]: E0723 15:33:49.960569    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:34:02 embed-certs-486436 kubelet[3910]: E0723 15:34:02.960179    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:34:15 embed-certs-486436 kubelet[3910]: E0723 15:34:15.959519    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:34:22 embed-certs-486436 kubelet[3910]: E0723 15:34:22.974042    3910 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:34:22 embed-certs-486436 kubelet[3910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:34:22 embed-certs-486436 kubelet[3910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:34:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:34:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:34:30 embed-certs-486436 kubelet[3910]: E0723 15:34:30.961682    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	
	
	==> storage-provisioner [2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895] <==
	I0723 15:25:38.599322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:25:38.615738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:25:38.615848       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:25:38.625001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:25:38.627959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921!
	I0723 15:25:38.630593       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51c2b8cb-8e74-45ca-81fa-08ae25bfe6af", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921 became leader
	I0723 15:25:38.728986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-486436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-7l2jw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw: exit status 1 (74.520545ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-7l2jw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (545.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:34:41.612292806 +0000 UTC m=+5890.818037537
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-911217 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-911217 logs -n 25: (2.404040323s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
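The sed commands above rewrite the CRI-O drop-in so it uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A quick manual check of the result (illustrative only; the path is taken from the log above and the values below are what the edits are expected to leave behind, not a capture from this run):

    # Inspect the drop-in the sed edits above modify.
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # Expected (approximate) settings after the edits:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]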
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
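After extracting the preload tarball, minikube re-runs crictl to confirm that the expected images are now present. A minimal manual check along the same lines (illustrative only; the image name comes from the earlier "couldn't find preloaded image" message):

    # List the images CRI-O now knows about and confirm the control-plane image is present.
    sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.30.3' && echo "preload OK"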
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
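The kubeadm config generated above is written to the node as /var/tmp/minikube/kubeadm.yaml.new and is ultimately consumed by the kubeadm binary under /var/lib/minikube/binaries. The exact invocation is not captured in this excerpt; an assumed form, for illustration only, is:

    # Assumed invocation (not shown in this log excerpt): kubeadm reads the generated config.
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml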
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
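The Run lines above show how each CA is published into the guest's trust store: the PEM is copied under /usr/share/ca-certificates, openssl reports its subject hash, and the file is symlinked as <hash>.0 in /etc/ssl/certs. A minimal Go sketch of that hash-and-symlink step, assuming the same openssl/ln commands seen in the log (the exec-based helper below is illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert symlinks a PEM file into /etc/ssl/certs under its OpenSSL
// subject-hash name (e.g. b5213941.0), mirroring the "openssl x509 -hash"
// and "ln -fs" commands in the log above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	// -f replaces an existing link, -s makes it symbolic (matching "ln -fs").
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}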
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
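The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would indicate the cert needs regeneration. A minimal Go equivalent of that check, assuming one of the certificate paths from the log (the native parsing approach is illustrative, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}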
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
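At this point the restart path has re-run the individual kubeadm init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd); the addon phase follows further down once the apiserver is healthy. A condensed sketch of driving that phase sequence, assuming the kubeadm binary location shown in the log and simplified sudo/PATH handling (not minikube's actual code path):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The phases invoked during a control-plane restart, in the order
	// they appear in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"/var/lib/minikube/binaries/v1.30.3/kubeadm"}, p...),
			"--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}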
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
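The polling above repeats until /healthz stops answering 403 (anonymous access forbidden before RBAC bootstrap) or 500 (post-start hooks still pending) and finally returns 200 ok. A minimal sketch of such a poll loop against the endpoint in the log; skipping TLS verification and the 500 ms retry interval are assumptions for illustration, not minikube's exact client setup:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the retry loop in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is not in the host trust store here, so
		// verification is skipped purely for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.200:8443/healthz", time.Minute))
}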
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
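Each pod_ready.go line above inspects a control-plane pod's Ready condition, and while the node itself reports Ready:"False" the pod is skipped with the "hosting pod ... is currently not Ready" error. A hedged client-go sketch of the underlying Ready-condition check; the kubeconfig location and the 2-second poll interval are assumptions, not minikube's actual settings:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a Ready condition with status True,
// the same check the waiting loop in the log performs.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-embed-certs-486436", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("kube-scheduler is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}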
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
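The repeated pgrep lines are minikube polling for the apiserver after the kubelet-start phase; it re-runs the same check roughly twice a second until a kube-apiserver process started for this profile appears. A rough equivalent of that loop (the interval and retry count here are illustrative, not minikube's exact values):

    # Poll until a kube-apiserver process for the minikube profile appears.
    for _ in $(seq 1 240); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "kube-apiserver is up"; break; }
      sleep 0.5
    done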
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
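The `exit 0` probe above is only a reachability check: libmachine keeps retrying an external ssh invocation with the options shown until it succeeds, which is how "Waiting for SSH to be available" resolves. An equivalent manual probe, using the key path and guest IP from the log:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa \
        docker@192.168.61.64 'exit 0' && echo "SSH reachable"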
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
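configureAuth regenerated the machine's server certificate with the SANs listed above and copied it to /etc/docker/server.pem on the guest. A quick manual check of those SANs on the host-side copy (openssl here is just an illustration; minikube itself generates the cert in Go):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'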
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
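The `%!s(MISSING)` in the sysconfig command above is a formatting artifact in minikube's log output, not part of what ran on the guest; judging from the echoed result, the provisioner effectively does the following (a reconstruction, not a verbatim copy of minikube's command):

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio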
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
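The fix.go lines compare the guest clock against the host; the garbled `date +%!s(MISSING).%!N(MISSING)` is presumably `date +%s.%N` (seconds.nanoseconds), which matches the `1721748061.738020629` reading above. A minimal version of the same check (guest IP from the log; the ssh invocation is an illustration):

    guest=$(ssh docker@192.168.61.64 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %.6fs\n", g - h }'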
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
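The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart; once it has run, the relevant keys should read pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", plus a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0. A quick way to confirm the result on the guest:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio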
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
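The preload path above: `crictl images --output json` showed the expected control-plane images missing, so minikube copied its cached preload tarball to /preloaded.tar.lz4 in the guest and unpacked it over /var. On the guest, the extraction step amounts to the commands below (taken from the log; the final listing is just a sanity check):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | head   # should now include registry.k8s.io/kube-apiserver:v1.30.3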
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
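For reference, the multi-document YAML above is the kubeadm/kubelet/kube-proxy configuration minikube renders and copies to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines that follow). A minimal standalone Go sketch of how such a multi-document file could be split and sanity-checked for apiVersion/kind is shown below; this is not minikube code, and the path and the gopkg.in/yaml.v3 dependency are assumptions made for the example.

// Hypothetical check, not part of minikube: split the multi-document kubeadm
// YAML and print each document's apiVersion/kind.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; adjust for a different cluster.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", m["apiVersion"], m["kind"])
	}
}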
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
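The six openssl runs above correspond to `openssl x509 -checkend 86400`, i.e. "does this certificate expire within the next 24 hours". A rough Go equivalent of one such probe is sketched below; it is my own illustration (standard library only), with the certificate path copied from the log.

// Hedged Go analogue of `openssl x509 -checkend 86400`: parse a PEM cert and
// report whether it expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}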
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
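The healthz wait above polls https://192.168.61.64:8444/healthz until it returns 200, tolerating the early 403 (anonymous access blocked before RBAC bootstrap finishes) and 500 (post-start hooks still running) responses. A self-contained Go sketch of an equivalent loop is shown below; it is my own illustration rather than minikube's api_server.go, with the URL and retry cadence taken from the log.

// Hedged sketch: poll the apiserver /healthz endpoint until it reports 200 OK.
// TLS verification is skipped because the probe targets a self-signed cluster cert.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.61.64:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly matches the retry cadence in the log
	}
}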
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
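The pod_ready wait above skips each system pod because the node itself is not yet "Ready" after the restart. For context, the underlying check amounts to reading a pod's Ready condition through the Kubernetes API; a minimal client-go sketch is below. This is not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are copied from the log, and the client-go usage is an assumption made for the example.

// Hedged sketch: fetch one kube-system pod and print its Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19319-11303/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-9qcfs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("Ready=%s\n", c.Status)
		}
	}
}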
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
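The image-cache sequence logged above (inspect the runtime for the expected image ID, remove any stale tag with crictl, then podman load the cached tarball) can be summarized with a small hypothetical Go sketch. This is not minikube's cache_images implementation, only an illustration of the same check-then-load flow, and it assumes passwordless sudo on the node.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagePresent reports whether the runtime already stores the image under the
    // expected ID, the same check behind the "does not exist at hash ..." lines.
    func imagePresent(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	return err == nil && strings.TrimSpace(string(out)) == wantID
    }

    // loadCachedImage removes a stale tag (if any) and loads the cached tarball,
    // mirroring the crictl rmi / podman load pair in the log.
    func loadCachedImage(image, wantID, tarball string) error {
    	if imagePresent(image, wantID) {
    		return nil // nothing to do, corresponds to "skipping ... (exists)"
    	}
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "image not found"
    	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
    	err := loadCachedImage(
    		"registry.k8s.io/etcd:3.5.14-0",
    		"cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa",
    		"/var/lib/minikube/images/etcd_3.5.14-0",
    	)
    	fmt.Println("load result:", err)
    }
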
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
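The kubeadm.yaml copied to the node above is generated from the cluster parameters shown in the kubeadm options line. As a rough illustration only (not minikube's bootstrapper code), a trimmed version of that config could be rendered from a Go text/template like this, using values taken from the log for no-preload-543029:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A heavily trimmed stand-in for the ClusterConfiguration shown above;
    // only a few fields are templated to keep the sketch short.
    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "10.244.0.0/16"
      serviceSubnet: 10.96.0.0/12
    `

    type params struct {
    	NodeIP            string
    	APIPort           int
    	KubernetesVersion string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
    	// Values from the log above (no-preload-543029).
    	_ = t.Execute(os.Stdout, params{
    		NodeIP:            "192.168.72.227",
    		APIPort:           8443,
    		KubernetesVersion: "v1.31.0-beta.0",
    	})
    }
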
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
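The 403, 500 and finally 200 responses above are the apiserver coming up: anonymous requests to /healthz are rejected until the RBAC bootstrap roles are installed, and individual post-start hooks report failures until they complete. A minimal polling sketch of that wait (hypothetical, not the api_server.go implementation; TLS verification is skipped here because no cluster CA is loaded) looks like this:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes, printing the intermediate failures along the way.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "ok", as in the final response above
    			}
    			// 403/500 while RBAC bootstrap roles and post-start hooks finish.
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.227:8443/healthz", 2*time.Minute))
    }
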
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
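The pod_ready.go waits above re-check each system-critical pod's Ready condition and skip ahead while the hosting node still reports Ready=False. Below is a minimal sketch of that kind of readiness poll with client-go, assuming a placeholder kubeconfig path; the pod name is the coredns pod from the log and the 4-minute budget mirrors the wait above.

```go
// Sketch: poll one pod until its PodReady condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5cfdc65f69-v2bhl", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to become Ready")
}
```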
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
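The sshutil.go lines above set up SSH clients against the guest (IP 192.168.72.227, port 22, the per-machine id_rsa key, user docker) so later steps can run systemctl and kubectl inside the VM. The following is a minimal golang.org/x/crypto/ssh sketch of that kind of client, not minikube's actual sshutil implementation; the key path and IP are copied from the log, and host-key checking is skipped as it would be for a throwaway test VM.

```go
// Sketch: dial the guest over SSH with the machine key and run one command.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.227:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}
```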
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
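After the addon manifests are applied over SSH above, addons.go reports "Verifying addon metrics-server=true". A minimal client-go sketch of one way such a verification could look is shown below; it assumes the addon's Deployment is named metrics-server in kube-system (the pod name metrics-server-78fcd8795b-dsfmg in the log suggests this) and uses a placeholder kubeconfig path.

```go
// Sketch: check whether the metrics-server Deployment has available replicas.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(),
		"metrics-server", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas available\n",
		dep.Status.AvailableReplicas, dep.Status.Replicas)
}
```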
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
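node_ready.go above polls the node object until its Ready condition flips to True (roughly 7 seconds here). A minimal sketch of that single check with client-go, using a placeholder kubeconfig path and the node name from the log:

```go
// Sketch: read one node's Ready condition from the API.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-543029", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
```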
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
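The run above that uses the v1.20.0 binaries keeps polling `crictl ps -a --quiet --name=<component>`, treats empty output as "no container was found", and then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal Go sketch of that container check is shown below, shelling out to crictl the same way; running it inside the guest (or over SSH) is assumed, and it is an illustration rather than cri.go itself.

```go
// Sketch: list CRI containers for a few component names and report empty results.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one container ID per line; empty output means no match.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("%s: error: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
```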
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
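The interleaved pod_ready.go lines are a separate readiness poll for the metrics-server pods. A small sketch of an equivalent check via kubectl (hypothetical helper, not the test harness; it assumes kubectl and a working kubeconfig for the cluster under test, and the pod name below is copied from this run and will differ elsewhere):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True",
// using kubectl's jsonpath output instead of client-go.
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 5; i++ {
		ready, err := podReady("kube-system", "metrics-server-569cc877fc-mkl8l")
		if err != nil {
			fmt.Println("check failed:", err)
		} else {
			fmt.Println("Ready:", ready)
		}
		time.Sleep(2 * time.Second)
	}
}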
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
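Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver at localhost:8443. A quick sketch for confirming that symptom directly (not part of the test harness; it assumes it is run on the minikube guest, where the kubeconfig used in the log points at localhost:8443):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// Matches what kubectl reports in the log: nothing is listening on
		// 8443, consistent with no kube-apiserver container being found.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}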
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
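(Editor's note on the loop above: the runner repeats the same probe roughly every three seconds, asking CRI-O for each control-plane container and, finding none, falling back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of that probe, assuming it is run on the minikube node where CRI-O is the runtime and crictl is on the PATH; the component names are the ones queried in the log:)

```bash
#!/bin/bash
# Sketch of the per-component check seen repeatedly in the log above.
# Assumption: executed on the node (e.g. via `minikube ssh`), not a verbatim
# excerpt of minikube's own code.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  if [ -z "$ids" ]; then
    echo "No container was found matching \"$name\""
  else
    echo "found: $ids"
  fi
done
```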
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
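(Editor's note: every "describe nodes" attempt above fails with the same connection refusal because no kube-apiserver container is running, so nothing listens on localhost:8443. A quick hedged way to confirm that from the node, reusing the binary path and kubeconfig shown in the log; command names other than those quoted in the log are assumptions, not minikube internals:)

```bash
# Assumption: run via `minikube ssh` on the affected node.
sudo crictl ps -a --name=kube-apiserver          # expected empty: apiserver never came up
curl -sk https://localhost:8443/healthz \
  || echo "apiserver not reachable on localhost:8443"
sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
  --kubeconfig=/var/lib/minikube/kubeconfig get nodes   # reproduces the same refusal
```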
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
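	The log above only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; it does not show the file itself. As a rough sketch (not the exact file minikube writes; the pod subnet and plugin options here are illustrative assumptions), a bridge CNI conflist generally looks like this:

	    # illustrative bridge CNI config; the subnet and options are assumptions, not the file from this run
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF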
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
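	Each "Gathering logs for ..." step above follows the same two-step crictl pattern, look up the container ID by component name and then tail its logs; condensed into a runnable form (component name and tail length taken from the commands above):

	    # find the container ID for a component, then tail its logs
	    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	    sudo crictl logs --tail 400 "$ID"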
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
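	After the addon manifests are applied, the metrics-server pod in these runs stays Pending/not Ready (see the pod_ready lines above and below). A quick manual check, assuming the addon's usual k8s-app=metrics-server label, would be:

	    kubectl --context embed-certs-486436 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context embed-certs-486436 top nodes   # succeeds only once metrics-server is actually serving metrics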
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
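	The healthz probe above hits the apiserver's HTTPS endpoint directly; the same check can be reproduced with kubectl, which reuses the cluster's client credentials:

	    kubectl --context embed-certs-486436 get --raw='/healthz'
	    # prints: ok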
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
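	Since minikube writes a kubeconfig context named after the profile, the finished cluster can be inspected directly:

	    kubectl config use-context embed-certs-486436
	    kubectl get nodes -o wide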
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
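Each "Gathering logs for ..." step above follows the same two-command pattern: resolve the component's container ID with crictl, then tail that container's logs. A hand-run sketch of the same loop (the component name is an example; run on the node, e.g. via minikube ssh):

  component="kube-apiserver"                        # example; also etcd, coredns, kube-proxy, ...
  ids=$(sudo crictl ps -a --quiet --name="${component}")
  if [ -z "${ids}" ]; then
    echo "No container was found matching \"${component}\"" >&2
    exit 1
  fi
  for id in ${ids}; do
    echo "=== ${component} [${id}] ==="
    sudo crictl logs --tail 400 "${id}"             # mirrors the --tail 400 used above
  done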
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
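The healthz probe logged earlier for this profile (the 200/ok response on port 8444) can be reproduced with a plain curl against the same endpoint; -k is used here only because this sketch skips loading the cluster CA, which minikube's own check does not do:

  curl -sk https://192.168.61.64:8444/healthz && echo
  # expected output when the control plane is healthy:
  # ok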
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
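The 4m0s wait that just expired is minikube polling the pod's Ready condition. A one-off equivalent from the host, using the pod name from this log and the profile this process configures later in the run (kubectl wait is a standard command, not minikube's internal code path):

  kubectl --context no-preload-543029 -n kube-system \
    wait --for=condition=Ready pod/metrics-server-78fcd8795b-dsfmg --timeout=240s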
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
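With all three of these profiles now reporting "Done!", the kubeconfig current-context has been rewritten each time. Purely as an illustration (this test run does not execute these commands), switching between the clusters by hand uses the standard context subcommands:

  kubectl config get-contexts
  kubectl config use-context embed-certs-486436
  kubectl config use-context default-k8s-diff-port-911217
  kubectl config use-context no-preload-543029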
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
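Collected in one place, the triage steps that the kubeadm output above already suggests (run on the failing node, e.g. via minikube ssh):

  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet | tail -n 100
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # then, for a failing container id CONTAINERID:
  # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID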
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
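The grep/rm sequence above is minikube's stale-kubeconfig cleanup: for each kubeconfig under /etc/kubernetes it checks for the expected control-plane endpoint and removes the file if the endpoint is absent (or, as here, the file does not exist), so the retried kubeadm init regenerates it. A compact sketch of that check as a shell loop — not minikube's actual implementation (which lives in kubeadm.go), just the equivalent behaviour:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    # stale or missing: remove so kubeadm writes a fresh copy on the next init
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done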
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
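The closing suggestion above points at a cgroup-driver mismatch between the kubelet and CRI-O as a common cause of K8S_KUBELET_NOT_RUNNING. A quick way to compare the two drivers and retry with the suggested override, assuming shell access to the node, that CRI-O keeps its configuration under /etc/crio/, and a placeholder profile name:

	# Driver the kubelet was configured with (config file written by kubeadm above)
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# Driver CRI-O is using (cgroup_manager in its configuration)
	sudo grep -Ri cgroup_manager /etc/crio/ 2>/dev/null
	# Retry the cluster start with the kubelet forced onto the systemd driver,
	# as suggested in the error output
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd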
	
	
	==> CRI-O <==
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.468279619Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9qcfs,Uid:663c125b-bed4-4622-8f0c-ff7837073bbd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748082470167767,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:14.576380414Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1721748082467583864,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:14.576373757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f99cf418866d4b5e9a536bd7c44e94811a6842058a473f72717f75f333ba0c1d,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-mkl8l,Uid:9e129e04-b1b8-47e8-9c07-20cdc89705e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748079659538073,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-mkl8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e129e04-b1b8-47e8-9c07-20cdc89705e4,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23
T15:21:14.576372389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&PodSandboxMetadata{Name:kube-proxy-d4zwd,Uid:55082c05-5fee-4c2a-ab31-897d838164d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748074895503073,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5fee-4c2a-ab31-897d838164d0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:14.576376497Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8a893464-6a36-4a91-9dde-8cb58d7dcfa8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748074890512965,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-07-23T15:21:14.576379245Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-911217,Uid:0147c985073f7215a7c36182709521e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748070103110093,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521e5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.64:8444,kubernetes.io/config.hash: 0147c985073f7215a7c36182709521e5,kubernetes.io/config.seen: 2024-07-23T15:21:09.579384373Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e7
5,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-911217,Uid:429546dbaed2c01c11bb28a15be2d102,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748070090682140,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.64:2379,kubernetes.io/config.hash: 429546dbaed2c01c11bb28a15be2d102,kubernetes.io/config.seen: 2024-07-23T15:21:09.638296680Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-911217,Uid:aef3b8c85bbf0ed67c3c9d628e2d961e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748070087700034,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9d628e2d961e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aef3b8c85bbf0ed67c3c9d628e2d961e,kubernetes.io/config.seen: 2024-07-23T15:21:09.579389977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-911217,Uid:3cc21cdd18d25fadf0e2d43494d5ec86,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748070085954522,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5ec86,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 3cc21cdd18d25fadf0e2d43494d5ec86,kubernetes.io/config.seen: 2024-07-23T15:21:09.579388930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c52074cf-d949-425d-8f39-675de6579b87 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.469271376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=152ec673-5f1c-408d-a8a8-f08d646e9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.469432328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=152ec673-5f1c-408d-a8a8-f08d646e9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.469677460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=152ec673-5f1c-408d-a8a8-f08d646e9328 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.473270860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daf44529-70e9-4039-b7fc-db118fdc2594 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.473432192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daf44529-70e9-4039-b7fc-db118fdc2594 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.479427527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3aa3a80-3ef0-4687-bd0d-e02eb900c70b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.479882944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883479860035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3aa3a80-3ef0-4687-bd0d-e02eb900c70b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.480450872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fdd0b32-32c2-44e6-b725-08ad365766b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.480533026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fdd0b32-32c2-44e6-b725-08ad365766b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.480843816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fdd0b32-32c2-44e6-b725-08ad365766b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.520862439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fed04578-93b7-4aa5-a8a0-fee1cf8bda91 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.520975610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fed04578-93b7-4aa5-a8a0-fee1cf8bda91 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.522171674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38ebeb43-2300-44a9-a20e-750bd44d2614 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.522776009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883522740975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38ebeb43-2300-44a9-a20e-750bd44d2614 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.523582192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68bc6474-f911-4fe9-a811-55b83ba3a404 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.523679097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68bc6474-f911-4fe9-a811-55b83ba3a404 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.524034802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68bc6474-f911-4fe9-a811-55b83ba3a404 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.569028571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c32a83b-a991-4c23-9f28-bf54cd039487 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.569119364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c32a83b-a991-4c23-9f28-bf54cd039487 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.570372698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cb00772-ccd3-4993-b831-90c33acae08f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.570954359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748883570929915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cb00772-ccd3-4993-b831-90c33acae08f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.571402830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad3e662d-aa29-4045-a20f-f65d521e1962 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.571571383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad3e662d-aa29-4045-a20f-f65d521e1962 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:34:43 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:34:43.571784975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad3e662d-aa29-4045-a20f-f65d521e1962 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68672c3e7b7b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   ca8fdb1501073       storage-provisioner
	b9d80bea625fd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   78a22f7d4c71b       busybox
	b58d38beb8d00       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b275707ca1bdc       coredns-7db6d8ff4d-9qcfs
	48a478b951b42       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   e085ea6e5fe2e       kube-proxy-d4zwd
	01a650a53706b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   ca8fdb1501073       storage-provisioner
	9ac0a72e37831       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   5b96d807e7924       kube-scheduler-default-k8s-diff-port-911217
	e73340ee36d2f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   bbb44bef6c4ae       etcd-default-k8s-diff-port-911217
	bcc1ca16d82a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   914b892d84f87       kube-controller-manager-default-k8s-diff-port-911217
	96e46e540ab2c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   928ac961f34d1       kube-apiserver-default-k8s-diff-port-911217
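	
	The table above is the CRI-level view that the log collector captured from CRI-O; a minimal sketch of re-querying it by hand on the same node, assuming the report's locally built minikube binary and profile name:
	
	  # List all containers (running and exited) through the CRI socket
	  out/minikube-linux-amd64 -p default-k8s-diff-port-911217 ssh -- sudo crictl ps -a
	  # Drill into a single entry, e.g. the exited storage-provisioner attempt listed above
	  out/minikube-linux-amd64 -p default-k8s-diff-port-911217 ssh -- sudo crictl inspect 01a650a53706b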
	
	
	==> coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35430 - 47338 "HINFO IN 3073176849920810953.3099087362000300018. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009793717s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-911217
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-911217
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=default-k8s-diff-port-911217
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_15_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:15:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-911217
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:34:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:31:56 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:31:56 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:31:56 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:31:56 +0000   Tue, 23 Jul 2024 15:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.64
	  Hostname:    default-k8s-diff-port-911217
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c57467d256054452b1a17d665265bdd8
	  System UUID:                c57467d2-5605-4452-b1a1-7d665265bdd8
	  Boot ID:                    a16276a0-e176-4523-9c31-de84f88a7ebc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-9qcfs                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-911217                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-911217              250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-911217     200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-d4zwd                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-911217              100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-569cc877fc-mkl8l                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-911217 event: Registered Node default-k8s-diff-port-911217 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-911217 event: Registered Node default-k8s-diff-port-911217 in Controller
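	
	The node description above can be re-queried against the same cluster; a minimal sketch, assuming minikube named the kubeconfig context after the profile (its default behaviour):
	
	  kubectl --context default-k8s-diff-port-911217 describe node default-k8s-diff-port-911217
	  kubectl --context default-k8s-diff-port-911217 -n kube-system get pods -o wide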
	
	
	==> dmesg <==
	[Jul23 15:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055514] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048807] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.925004] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul23 15:21] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.054667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064954] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.208133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.129051] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.302029] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.259904] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.058985] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.143250] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.583923] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.975582] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +5.537445] kauditd_printk_skb: 78 callbacks suppressed
	[ +23.420899] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] <==
	{"level":"info","ts":"2024-07-23T15:21:12.4882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:12.488257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:12.488314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 received MsgPreVoteResp from b4594cd905cc0f18 at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:12.48833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:12.488336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 received MsgVoteResp from b4594cd905cc0f18 at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:12.488344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4594cd905cc0f18 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:12.488353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4594cd905cc0f18 elected leader b4594cd905cc0f18 at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:12.501178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:21:12.501137Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b4594cd905cc0f18","local-member-attributes":"{Name:default-k8s-diff-port-911217 ClientURLs:[https://192.168.61.64:2379]}","request-path":"/0/members/b4594cd905cc0f18/attributes","cluster-id":"a20e7ca2a4f7396b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:21:12.502012Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:21:12.502217Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:21:12.50223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:21:12.503054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:21:12.503654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.64:2379"}
	{"level":"info","ts":"2024-07-23T15:21:27.253298Z","caller":"traceutil/trace.go:171","msg":"trace[2004434959] linearizableReadLoop","detail":"{readStateIndex:566; appliedIndex:565; }","duration":"174.107324ms","start":"2024-07-23T15:21:27.079151Z","end":"2024-07-23T15:21:27.253259Z","steps":["trace[2004434959] 'read index received'  (duration: 173.106707ms)","trace[2004434959] 'applied index is now lower than readState.Index'  (duration: 999.821µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T15:21:27.253535Z","caller":"traceutil/trace.go:171","msg":"trace[1069394220] transaction","detail":"{read_only:false; response_revision:533; number_of_response:1; }","duration":"176.831798ms","start":"2024-07-23T15:21:27.076687Z","end":"2024-07-23T15:21:27.253519Z","steps":["trace[1069394220] 'process raft request'  (duration: 175.507545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:21:27.253952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.765085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2024-07-23T15:21:27.254037Z","caller":"traceutil/trace.go:171","msg":"trace[1313712765] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:533; }","duration":"174.916088ms","start":"2024-07-23T15:21:27.079108Z","end":"2024-07-23T15:21:27.254024Z","steps":["trace[1313712765] 'agreement among raft nodes before linearized reading'  (duration: 174.672347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:21:27.254264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.320368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-911217\" ","response":"range_response_count:1 size:5636"}
	{"level":"info","ts":"2024-07-23T15:21:27.254325Z","caller":"traceutil/trace.go:171","msg":"trace[1503783150] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-911217; range_end:; response_count:1; response_revision:533; }","duration":"164.401499ms","start":"2024-07-23T15:21:27.089913Z","end":"2024-07-23T15:21:27.254315Z","steps":["trace[1503783150] 'agreement among raft nodes before linearized reading'  (duration: 164.306162ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:21:27.254601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.128106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-07-23T15:21:27.25466Z","caller":"traceutil/trace.go:171","msg":"trace[924220002] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:533; }","duration":"132.205764ms","start":"2024-07-23T15:21:27.122447Z","end":"2024-07-23T15:21:27.254653Z","steps":["trace[924220002] 'agreement among raft nodes before linearized reading'  (duration: 132.123828ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:31:12.528485Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":794}
	{"level":"info","ts":"2024-07-23T15:31:12.538405Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":794,"took":"9.16197ms","hash":115489396,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-23T15:31:12.538501Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":115489396,"revision":794,"compact-revision":-1}
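	
	The slow-apply warnings above ("apply request took too long", well over the 100ms expected-duration) can be cross-checked from inside the etcd pod; a rough sketch, where the certificate paths are the ones minikube usually generates and are an assumption here:
	
	  kubectl --context default-k8s-diff-port-911217 -n kube-system exec etcd-default-k8s-diff-port-911217 -- \
	    sh -c 'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint status -w table'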
	
	
	==> kernel <==
	 15:34:43 up 13 min,  0 users,  load average: 0.04, 0.08, 0.07
	Linux default-k8s-diff-port-911217 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] <==
	I0723 15:29:14.840132       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:31:13.841535       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:31:13.841652       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0723 15:31:14.842168       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:31:14.842296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:31:14.842331       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:31:14.842207       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:31:14.842447       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:31:14.843360       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:32:14.843408       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:32:14.843504       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:32:14.843522       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:32:14.843667       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:32:14.843829       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:32:14.845602       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:34:14.844367       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:34:14.844476       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:34:14.844520       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:34:14.846633       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:34:14.846762       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:34:14.846835       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
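	
	Every aggregation error above is the apiserver getting an HTTP 503 from the metrics-server that backs the v1beta1.metrics.k8s.io APIService; a minimal diagnostic sketch, assuming the addon's usual k8s-app=metrics-server label:
	
	  kubectl --context default-k8s-diff-port-911217 get apiservice v1beta1.metrics.k8s.io -o yaml
	  kubectl --context default-k8s-diff-port-911217 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context default-k8s-diff-port-911217 -n kube-system logs -l k8s-app=metrics-server --tail=50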
	
	
	==> kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] <==
	I0723 15:28:57.840756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:29:27.400070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:29:27.848877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:29:57.404364       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:29:57.857464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:27.411435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:30:27.864606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:57.415426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:30:57.871375       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:31:27.420838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:31:27.880736       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:31:57.424923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:31:57.888776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:32:20.672470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="349.947µs"
	E0723 15:32:27.430007       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:32:27.895310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:32:31.668234       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="165.913µs"
	E0723 15:32:57.434935       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:32:57.902754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:33:27.440517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:33:27.911241       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:33:57.445343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:33:57.918895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:34:27.449697       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:34:27.926331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] <==
	I0723 15:21:15.173901       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:21:15.182637       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.64"]
	I0723 15:21:15.235902       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 15:21:15.235939       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:21:15.235969       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:21:15.241153       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:21:15.243986       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:21:15.244158       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:15.247867       1 config.go:192] "Starting service config controller"
	I0723 15:21:15.247929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:21:15.247987       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:21:15.248015       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:21:15.248758       1 config.go:319] "Starting node config controller"
	I0723 15:21:15.250088       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:21:15.349138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:21:15.349211       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:21:15.350922       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] <==
	I0723 15:21:11.281031       1 serving.go:380] Generated self-signed cert in-memory
	W0723 15:21:13.815272       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 15:21:13.815400       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:21:13.815441       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:21:13.815477       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:21:13.837439       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 15:21:13.838843       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:13.840623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:21:13.844554       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 15:21:13.844616       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:21:13.847082       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:13.949598       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:32:09 default-k8s-diff-port-911217 kubelet[935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:32:09 default-k8s-diff-port-911217 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:32:09 default-k8s-diff-port-911217 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:32:20 default-k8s-diff-port-911217 kubelet[935]: E0723 15:32:20.653371     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:32:31 default-k8s-diff-port-911217 kubelet[935]: E0723 15:32:31.653657     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:32:42 default-k8s-diff-port-911217 kubelet[935]: E0723 15:32:42.653837     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:32:57 default-k8s-diff-port-911217 kubelet[935]: E0723 15:32:57.653024     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:09.653336     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:09.671389     935 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:33:09 default-k8s-diff-port-911217 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:33:20 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:20.652987     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:33:31 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:31.655574     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:33:44 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:44.653394     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:33:56 default-k8s-diff-port-911217 kubelet[935]: E0723 15:33:56.653591     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:34:09.653377     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:34:09.671652     935 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:34:09 default-k8s-diff-port-911217 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:34:21 default-k8s-diff-port-911217 kubelet[935]: E0723 15:34:21.652705     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:34:36 default-k8s-diff-port-911217 kubelet[935]: E0723 15:34:36.654367     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	
	
	==> storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] <==
	I0723 15:21:15.120730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 15:21:45.125492       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] <==
	I0723 15:21:45.948925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:21:45.960155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:21:45.960310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:21:45.972149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:21:45.972756       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdbe9b7-9c70-4aaf-9bed-7816d87777fa", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc became leader
	I0723 15:21:45.972969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc!
	I0723 15:21:46.073617       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mkl8l
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l: exit status 1 (62.038559ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mkl8l" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0723 15:27:11.819372   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-543029 -n no-preload-543029
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:35:08.829685151 +0000 UTC m=+5918.035429869
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-543029 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-543029 logs -n 25: (2.051725685s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
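
The repeated "will retry after ... waiting for machine to come up" lines above come from a polling loop that waits for the VM's DHCP lease with growing, jittered delays. A minimal Go sketch of that pattern (the helper name and parameters are illustrative, not minikube's actual retry.go API):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff polls fn until it succeeds or maxAttempts is reached,
    // sleeping a randomized, growing delay between attempts, as the
    // "will retry after ..." log lines above suggest.
    func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the delay with each attempt and add jitter.
    		delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
    	ipKnown := false
    	_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
    		if !ipKnown {
    			ipKnown = true // pretend the DHCP lease shows up on the 2nd try
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    }
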
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
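
The "Using SSH client type: external" block shows the driver shelling out to the system ssh binary with host-key checking disabled and running `exit 0` until the guest answers. A hedged sketch of such a probe, reusing the options visible in the log (the function name and key path are made up for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probeSSH runs "exit 0" over an external ssh client, mirroring the
    // options visible in the log above. It returns nil once the guest
    // accepts the connection.
    func probeSSH(host, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + host,
    		"exit 0",
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
    	}
    	return nil
    }

    func main() {
    	// Host is taken from the log; the key path is illustrative only.
    	for attempt := 0; attempt < 30; attempt++ {
    		if err := probeSSH("192.168.39.200", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
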
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
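
The shell snippet above makes the 127.0.1.1 hostname entry idempotent: it only touches /etc/hosts when no matching line exists. A rough Go equivalent of that check-then-append logic (simplified: it appends when the name is missing but does not rewrite an existing 127.0.1.1 line, and the example targets a scratch file rather than the real /etc/hosts):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry appends "127.0.1.1 <name>" to a hosts file unless a
    // line ending in that name already exists.
    func ensureHostsEntry(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(strings.TrimSpace(line), " "+name) {
    			return nil // already present
    		}
    	}
    	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
    	return err
    }

    func main() {
    	tmp, err := os.CreateTemp("", "hosts")
    	if err != nil {
    		panic(err)
    	}
    	tmp.WriteString("127.0.0.1 localhost\n")
    	tmp.Close()
    	fmt.Println(ensureHostsEntry(tmp.Name(), "embed-certs-486436"))
    }
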
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
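
The provision step above generates a server certificate whose subject alternative names cover the listed IPs and hostnames. The sketch below issues a certificate with the same SANs using crypto/x509; it is self-signed for brevity, whereas minikube signs with its own CA (ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Generate the server key.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// SANs copied from the "generating server cert" log line above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-486436"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-486436", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
    	}
    	// Self-signed here; a CA-signed cert would pass the CA template and key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
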
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
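
The guest clock check parses the guest's `date +%s.%N` output, compares it with the host-side timestamp, and accepts the result when the delta is small. A short Go sketch of that arithmetic using the values from the log (the one-second tolerance is an assumption, not necessarily minikube's threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the guest's "date +%s.%N" output
    // (e.g. "1721748021.563549452") into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := parts[1]
    		if len(frac) > 9 {
    			frac = frac[:9]
    		}
    		// Right-pad to nanoseconds so "5" means 500ms, not 5ns.
    		frac += strings.Repeat("0", 9-len(frac))
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1721748021.563549452")
    	if err != nil {
    		panic(err)
    	}
    	// Host-side timestamp taken from the log line above.
    	remote := time.Date(2024, 7, 23, 15, 20, 21, 480435025, time.UTC)
    	delta := guest.Sub(remote)
    	const tolerance = time.Second // assumed threshold
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
    }
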
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
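
When the bridge-nf-call-iptables sysctl is missing (the status-255 error above), the provisioner loads br_netfilter and then enables IPv4 forwarding. A hedged Go sketch of that fallback sequence, using the standard /proc paths (must run as root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: if the
    // bridge-nf-call-iptables sysctl is missing, load br_netfilter, then
    // enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Module not loaded yet (the "cannot stat" error above); load it.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
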
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
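
After the preload tarball is extracted, `sudo crictl images --output json` is run again to confirm the expected images are present. A sketch of such a check in Go; the JSON field names assumed here (images/repoTags) follow the CRI image listing and may differ between crictl versions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // checkPreloaded runs "crictl images --output json" and reports whether a
    // given image tag is present, similar to the preload check in the log.
    func checkPreloaded(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var resp struct {
    		Images []struct {
    			RepoTags []string `json:"repoTags"`
    		} `json:"images"`
    	}
    	if err := json.Unmarshal(out, &resp); err != nil {
    		return false, err
    	}
    	for _, img := range resp.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	found, err := checkPreloaded("registry.k8s.io/kube-apiserver:v1.30.3")
    	fmt.Println(found, err)
    }
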
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
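Each bundle copied into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names above), which is how OpenSSL-based clients look up trusted CAs. A rough Go equivalent of one hash-and-link step, shelling out to openssl just as the log does (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a CA certificate and
    // symlinks it into /etc/ssl/certs as <hash>.0, mirroring the ln -fs above.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }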
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
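The `openssl x509 -checkend 86400` runs above exit non-zero if a certificate expires within 86400 seconds (24 hours); that is what decides whether the control-plane certs get regenerated. The same test expressed with Go's crypto/x509, assuming each file holds a single PEM certificate (an illustrative sketch):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend 86400` answers for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }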
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
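On restart the cluster is repaired by re-running individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane and etcd above, with the addon phase deferred until the apiserver reports healthy further down. A compact sketch of driving that sequence (commands copied from the log; only meaningful inside a minikube guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phase list mirrors the kubeadm invocations in the log above.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
    }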
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
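The healthz polling above follows the usual restart progression: 403 while the anonymous probe is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200. A minimal Go poll loop with the same shape (endpoint and interval taken from the log; TLS verification is skipped only to keep the sketch self-contained, whereas the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes, mirroring the 403 -> 500 -> 200 progression above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: a real client would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.200:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }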
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
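The pod_ready waits above skip a pod outright while its node reports Ready=False and otherwise poll until the pod's own Ready condition turns True. A client-go sketch of that condition check (assumes a reachable kubeconfig in the default location; pod and namespace names are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True -- the condition
    // the pod_ready waits above are polling for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for i := 0; i < 20; i++ {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-486436", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }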
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
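WaitForSSH above simply retries an external `ssh ... exit 0` against the guest until it succeeds. A small Go sketch of the same readiness loop (SSH options abbreviated from the log; the key path is from this run and purely illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries `ssh ... exit 0` until the guest accepts the connection,
    // which is all the WaitForSSH step above is doing.
    func waitForSSH(ip, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+ip, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
    }

    func main() {
        err := waitForSSH("192.168.50.51", "/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa", 2*time.Minute)
        fmt.Println(err)
    }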
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
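configureAuth issues a per-machine server certificate signed by ca.pem/ca-key.pem with the SAN list shown above (loopback, the machine IP and its hostnames). A condensed crypto/x509 sketch of issuing such a certificate (CA loading assumes PKCS#1 keys for brevity; names and paths are illustrative, not docker-machine's actual code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // loadCA reads a PEM CA certificate and its PKCS#1 private key.
    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
        certPEM, err := os.ReadFile(certPath)
        if err != nil {
            return nil, nil, err
        }
        keyPEM, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, nil, err
        }
        cb, _ := pem.Decode(certPEM)
        kb, _ := pem.Decode(keyPEM)
        if cb == nil || kb == nil {
            return nil, nil, fmt.Errorf("missing PEM block")
        }
        cert, err := x509.ParseCertificate(cb.Bytes)
        if err != nil {
            return nil, nil, err
        }
        key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
        if err != nil {
            return nil, nil, err
        }
        return cert, key, nil
    }

    func main() {
        caCert, caKey, err := loadCA("ca.pem", "ca-key.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().Unix()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-000272"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the provision log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-000272"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.51")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        _ = os.WriteFile("server.pem", certOut, 0644)
        _ = os.WriteFile("server-key.pem", keyOut, 0600)
    }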
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
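The fix.go lines above accept the 78ms guest/host clock skew because it is under tolerance. A minimal sketch of that comparison in Go; the 2-second tolerance used here is an illustrative value, not minikube's actual constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no resync is needed, and returns the absolute delta.
    func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(78 * time.Millisecond) // roughly the delta seen in the log above
    	delta, ok := withinTolerance(host, guest, 2*time.Second)
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }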
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
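The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) and then reloads systemd and restarts CRI-O. A rough sketch that assembles the same command strings; it only prints them, whereas minikube runs them on the guest over SSH:

    package main

    import "fmt"

    // Sketch of the sed-based edits applied to /etc/crio/crio.conf.d/02-crio.conf
    // in the log above. The commands are only printed here.
    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	cmds := []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    	for _, c := range cmds {
    		fmt.Println(c)
    	}
    }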
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
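The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, and copy the file back into place. The same idea in Go; this sketch only prints the rewritten contents rather than overwriting /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry returns the hosts-file contents with any previous line
    // for host replaced by "ip\thost", mirroring the grep -v / echo pipeline above.
    func upsertHostsEntry(contents, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop the stale mapping, like grep -v above
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(upsertHostsEntry(string(data), "192.168.50.1", "host.minikube.internal"))
    }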
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
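The retry.go lines show libmachine polling the libvirt DHCP leases with growing, jittered delays until the VM reports an IP. A simplified loop in that spirit; getIP is a hypothetical stand-in for the real lease lookup and the returned address is illustrative only:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // getIP is a placeholder for the real DHCP-lease lookup; it fails until the
    // guest has obtained an address (simulated after a few attempts here).
    func getIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.39.10", nil // illustrative address only
    }

    func main() {
    	delay := 300 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := getIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the delay and add jitter, roughly matching the retry.go output above.
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 2
    	}
    }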
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
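cache_images.go treats an image as needing transfer when the runtime stores it under a different ID than expected (or not at all), removes the stale tag with crictl, and reloads it from the local cache. A sketch of that comparison using the same podman inspect query seen above; the digest in main is a truncated placeholder:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether image must be (re)loaded into the container
    // runtime because it is absent or stored under a different image ID.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present at all -> load from the local cache
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	// Placeholder ID for illustration; the real value comes from minikube's image cache.
    	fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.2", "80d28bedfe5d"))
    }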
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
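The generated kubeadm config above pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A small stdlib check, purely illustrative and not part of minikube, that the two ranges stay disjoint:

    package main

    import (
    	"fmt"
    	"net"
    )

    // overlaps reports whether two CIDR blocks share any addresses; since CIDR
    // blocks are either disjoint or nested, it suffices to check containment of
    // each network address. Inputs are assumed to be valid CIDRs.
    func overlaps(a, b string) bool {
    	_, na, _ := net.ParseCIDR(a)
    	_, nb, _ := net.ParseCIDR(b)
    	return na.Contains(nb.IP) || nb.Contains(na.IP)
    }

    func main() {
    	pod, svc := "10.244.0.0/16", "10.96.0.0/12"
    	fmt.Printf("pod CIDR %s overlaps service CIDR %s: %v\n", pod, svc, overlaps(pod, svc))
    }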
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
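The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust store by linking it as <subject-hash>.0. A hedged sketch of the same two steps; it shells out to openssl and assumes write access to the trust directory, with paths taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkIntoTrustStore asks openssl for the subject hash of a CA certificate
    // and symlinks the cert as <hash>.0, as in the commands above.
    func linkIntoTrustStore(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("%s/%s.0", trustDir, hash)
    	os.Remove(link) // emulate ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }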
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
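The -checkend 86400 probes above ask openssl whether each certificate remains valid for at least one more day. The equivalent check with crypto/x509; the path in main is one of the files probed in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least d more time, the crypto/x509 counterpart of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println("valid for another 24h:", ok, err)
    }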
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
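restartPrimaryControlPlane removes any kubeconfig-style file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (here the files are simply missing, so the grep and rm are effectively no-ops). A sketch of that cleanup; unlike the log, it skips missing files instead of running rm -f on them:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale drops a kubeconfig-style file when it does not point at the
    // expected control-plane endpoint; missing files are simply skipped.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil
    	}
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // still points at the right endpoint, keep it
    	}
    	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    	return os.Remove(path)
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, "https://control-plane.minikube.internal:8443"); err != nil {
    			fmt.Println(err)
    		}
    	}
    }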
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
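
The kubeadm.go:187 block above is the full configuration minikube renders and then writes to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by document markers. Purely to illustrate that structure (this is not minikube's own parsing code, and it assumes gopkg.in/yaml.v3 is available), the ClusterConfiguration document can be read back like this:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type clusterConfig struct {
	Kind                 string `yaml:"kind"`
	KubernetesVersion    string `yaml:"kubernetesVersion"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	Networking           struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	// A trimmed copy of the ClusterConfiguration rendered in the log above.
	doc := []byte(`
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8444
kubernetesVersion: v1.30.3
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`)
	var cfg clusterConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s pods=%s services=%s\n",
		cfg.Kind, cfg.KubernetesVersion, cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet)
}

The same approach applies to the KubeletConfiguration and KubeProxyConfiguration documents that follow it in the rendered file.
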
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
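
Each of the openssl x509 -checkend 86400 invocations above asks whether the certificate is still valid for at least the next 86400 seconds (24 hours); a zero exit status means it is, so no regeneration is needed. A hedged Go equivalent using crypto/x509 (the path is just one of the files listed above; this is not the code minikube runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
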
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
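
The healthz probes above follow the usual restart sequence: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 once the control plane is ready. A minimal polling sketch in Go (the URL and timeouts are illustrative; TLS verification is skipped only to keep the sketch self-contained, whereas the test client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.64:8444/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}
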
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
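
The ops.go line above reads /proc/$(pgrep kube-apiserver)/oom_adj and finds -16, i.e. the apiserver is deprioritized for the kernel OOM killer. A tiny sketch that reads the equivalent value for the current process (using the modern oom_score_adj file so it runs on any Linux host; this is only an illustration, not the test's own check):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/self/oom_score_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("oom_score_adj:", strings.TrimSpace(string(data)))
}
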
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
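For reference, the addon enablement traced above reduces to copying the manifests under /etc/kubernetes/addons and applying them with the bundled kubectl. A minimal consolidated sketch, using the exact paths and binary version shown in the log (the grouping into one transcript is mine; minikube issues these over SSH and partly in parallel):

    # storage-provisioner and storageclass are applied as separate invocations
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
    # the four metrics-server manifests are applied in a single invocation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml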
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
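For reference, the CRI-O preparation above (pause image, cgroup driver, conmon cgroup, unprivileged low ports) comes down to the following edits of /etc/crio/crio.conf.d/02-crio.conf followed by a runtime restart; the commands are taken from the log, only the consolidation into one sequence is mine:

    # point CRI-O at the expected pause image and switch to the cgroupfs driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # recreate conmon_cgroup = "pod" directly after the cgroup_manager line
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # ensure a default_sysctls block exists, then allow pods to bind privileged ports
    sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio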
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
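Each "needs transfer" decision above follows the same pattern: podman image inspect either fails or returns an image ID different from the pinned one, so the stale tag is removed with crictl before the cached archive is loaded. A minimal sketch of that check for a single image, with the expected ID and paths taken from the log (not minikube's exact code path):

    IMAGE=registry.k8s.io/etcd:3.5.14-0
    WANT=cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa
    # podman reports the image ID without a sha256: prefix; an empty result means the image is absent.
    HAVE=$(sudo podman image inspect --format '{{.Id}}' "$IMAGE" 2>/dev/null || true)
    if [ "$HAVE" != "$WANT" ]; then
      sudo /usr/bin/crictl rmi "$IMAGE" 2>/dev/null || true   # drop the stale tag, then load from cache
    fi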
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
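The load loop above amounts to: skip the copy when the archive already exists on the node, stream it into the shared containers/storage with podman, then move on to the next image. A rough manual equivalent for one archive, using paths and tags from the log (a sketch, not the minikube implementation):

    ARCHIVE=/var/lib/minikube/images/etcd_3.5.14-0
    sudo stat -c "%s %y" "$ARCHIVE"            # same existence/mtime check the copy step performs
    sudo podman load -i "$ARCHIVE"             # CRI-O shares this image store, so crictl sees the result
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.14-0
    sudo crictl images | grep etcd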
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
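The generated drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) uses the usual systemd idiom: the empty ExecStart= clears the packaged command before the minikube-specific one is set. A sketch of inspecting and applying it on the node:

    sudo systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload      # pick up the new drop-in
    sudo systemctl start kubelet
    systemctl is-active kubelet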
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
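The 2168-byte kubeadm.yaml.new written here is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). On the restart path it is not fed to a full kubeadm init; instead the individual phases are replayed against it, in the order seen in the Run: lines further down:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.31.0-beta.0
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase addon all         --config "$CFG"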
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
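Each certificate is checked with openssl x509 -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit is what triggers regeneration. A small sketch of the same check with the exit code made explicit (file names taken from the log):

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
        echo "${crt}: valid for at least 24h"
      else
        echo "${crt}: expires within 24h, would be regenerated"
      fi
    done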
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
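The healthz wait loop is just an HTTPS probe against the advertise address; the status-code progression seen below is the expected one for a restarting control plane. A manual equivalent (endpoint from the log; -k because the cluster CA is not in the host trust store):

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.72.227:8443/healthz
    # 000  connection refused: apiserver not listening yet
    # 403  serving, but anonymous access to /healthz not yet authorized
    # 500  serving, some post-start hooks still failing
    # 200  healthy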
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
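The only checks still failing at this point are the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which clear once the bootstrap controllers finish writing their default objects. Individual checks can also be probed directly (a sketch; per-check /healthz subpaths are standard kube-apiserver behaviour):

    curl -sk 'https://192.168.72.227:8443/healthz?verbose'
    curl -sk https://192.168.72.227:8443/healthz/poststarthook/rbac/bootstrap-roles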
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
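The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, an illustrative bridge conflist for the 10.244.0.0/16 pod CIDR looks roughly like the following; the file minikube actually generates may differ in fields and formatting:

    # Illustrative only: not guaranteed to match minikube's generated 1-k8s.conflist.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF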
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
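The NodePressure verification reads the node's capacity and pressure conditions through the API. Roughly the same information can be pulled with kubectl (the context name is assumed to match the profile name):

    kubectl --context no-preload-543029 get node no-preload-543029 \
      -o jsonpath='{.status.capacity.cpu}{" cpu, "}{.status.capacity.ephemeral-storage}{" ephemeral-storage"}{"\n"}'
    kubectl --context no-preload-543029 get node no-preload-543029 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'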
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
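Each cri.go/logs.go cycle above is the same probe repeated per control-plane component: ask crictl for containers whose name matches, and treat empty output as "no container found". A rough sketch of that single check (an illustration, not minikube's own cri.go code; it assumes crictl is installed on the node and can be run via sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "listing CRI containers" entries in the log:
// it returns the IDs crictl reports for containers matching the given name.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		if len(ids) == 0 {
			// The state the log keeps reporting: 0 containers found.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}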
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
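The interleaved pod_ready.go:102 lines come from the other minikube processes (66641, 65177, 64842) polling their metrics-server pods, which never reach a Ready condition of True. A rough illustration of that kind of readiness poll with client-go (kubeconfig path, namespace, and pod name are taken from the log; this is an illustrative helper, not minikube's own pod_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// the check the pod_ready.go:102 lines above keep failing.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-78fcd8795b-dsfmg", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}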
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
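Because no control-plane container exists, every cycle falls back to gathering host-level logs: kubelet and CRI-O via journalctl, kernel warnings via dmesg, and a container listing. A compact sketch that runs the same gathering commands locally and prints their output (commands copied from the log; assumes a systemd host with crictl available and root access):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The shell commands the log shows under "Gathering logs for ..."
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", s.name, out)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
		}
	}
}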
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
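(Note on the interleaved pod_ready.go lines: they come from separate test processes (pids 66641, 65177, 64842) polling their metrics-server pods, which never report Ready before the test timeout. A hedged sketch of an equivalent manual check with kubectl; the pod name is one printed in the log, and the jsonpath query is a common way to read the Ready condition rather than the exact call the harness makes:

	kubectl --namespace kube-system get pod metrics-server-78fcd8795b-dsfmg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" while the pod is not Ready; the test keeps polling until it times out
)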
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
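The rounds of "Gathering logs for ..." above all follow the same two-step pattern: resolve container ids with "crictl ps -a --quiet --name=<component>", then dump each container's recent output with "crictl logs --tail 400 <id>". A condensed sketch of that loop, assuming crictl is on PATH and sudo is available (the component list and output format here are illustrative):

    // gather_logs.go: sketch of the per-component log collection pattern above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func gather(component string) error {
        // Step 1: find every container (running or exited) for this component.
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return err
        }
        // Step 2: dump the last 400 log lines of each container found.
        for _, id := range strings.Fields(string(ids)) {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            if err := gather(c); err != nil {
                fmt.Println("gathering", c, "failed:", err)
            }
        }
    }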
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
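The no-preload trace above ends the same way as the other two: list the kube-system pods, check their phases (metrics-server is still Pending), confirm the default service account exists, then wait on the kubelet unit. A rough sketch of the pod-phase check, shelling out to kubectl for brevity instead of using the Kubernetes API client that minikube itself relies on (flags and the jsonpath expression are illustrative assumptions):

    // system_pods_wait.go: sketch of the "waiting for k8s-apps to be running" check.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // allRunning reports whether every kube-system pod is Running or Succeeded.
    func allRunning() (bool, error) {
        out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
            "-o", "jsonpath={range .items[*]}{.status.phase}{\"\\n\"}{end}").Output()
        if err != nil {
            return false, err
        }
        for _, phase := range strings.Fields(string(out)) {
            if phase != "Running" && phase != "Succeeded" {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        for {
            ok, err := allRunning()
            if err != nil {
                fmt.Println("kubectl failed:", err)
            } else if ok {
                fmt.Println("all kube-system pods are running")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }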
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
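	* A minimal sketch of the retry suggested in the output above (with <profile> as a placeholder for this failing cluster's profile name; the cgroup-driver override is the one minikube itself recommends):
	      minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	  If the kubelet still fails to come up, 'journalctl -xeu kubelet' on the node (as already advised above) is the next place to look.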
	
	
	==> CRI-O <==
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.304406350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748910304364900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdc1e2d9-2d74-43ed-9c8f-37b06c5af5da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.305399797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bfdc38c-92ff-4906-a343-92c00c715282 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.305472513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bfdc38c-92ff-4906-a343-92c00c715282 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.305706212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bfdc38c-92ff-4906-a343-92c00c715282 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.346919648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34c1b4cb-f32f-4fae-a040-a62d2578090e name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.347039846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34c1b4cb-f32f-4fae-a040-a62d2578090e name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.348403448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55ad5dcd-9797-48b8-abae-470c9f7f5ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.348935346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748910348902638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55ad5dcd-9797-48b8-abae-470c9f7f5ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.349619160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75ddb0de-f539-4014-aa3d-eb5fc36cb9fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.349672428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75ddb0de-f539-4014-aa3d-eb5fc36cb9fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.350050284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75ddb0de-f539-4014-aa3d-eb5fc36cb9fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.390925667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73540a38-a402-40e5-8bae-b0791aba5ecd name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.391017047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73540a38-a402-40e5-8bae-b0791aba5ecd name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.392175628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fae9e159-d34b-4c83-8044-f9e791b5093b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.392611404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748910392584992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fae9e159-d34b-4c83-8044-f9e791b5093b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.393069635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fd2d70f-c12e-4b51-ad3e-a2b2d15f4109 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.393122728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fd2d70f-c12e-4b51-ad3e-a2b2d15f4109 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.393526877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fd2d70f-c12e-4b51-ad3e-a2b2d15f4109 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.427702322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a3fb66d-0eff-4359-9651-2c2edd83e240 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.427807531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a3fb66d-0eff-4359-9651-2c2edd83e240 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.428717680Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b33d023-8b07-4194-a51f-e92a283c687e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.429050565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721748910429029250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b33d023-8b07-4194-a51f-e92a283c687e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.429569753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=914eb00a-ba0b-4aef-9ac6-d3a86af064c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.429625209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=914eb00a-ba0b-4aef-9ac6-d3a86af064c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:35:10 no-preload-543029 crio[721]: time="2024-07-23 15:35:10.429845170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=914eb00a-ba0b-4aef-9ac6-d3a86af064c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33bc08508dd46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   37c60368c52e6       storage-provisioner
	e61738b0d43a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d73766a0dfc70       busybox
	289a796ff2c74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b1b956731128b       coredns-5cfdc65f69-v2bhl
	2d2d4409a7d9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   37c60368c52e6       storage-provisioner
	62a5ee505542b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   e0f26f6765203       kube-proxy-wzbps
	e23570772b1ba       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   08b4f071b699d       etcd-no-preload-543029
	64d77a0d9b5ed       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   f882803b840a6       kube-apiserver-no-preload-543029
	7006aba67d59f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   8e7bc39b96f0e       kube-controller-manager-no-preload-543029
	bdf775206fb2d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   c78446156bfe8       kube-scheduler-no-preload-543029
	
	
	==> coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32842 - 9729 "HINFO IN 1856836756006291531.7268083712499520585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021472439s
	
	
	==> describe nodes <==
	Name:               no-preload-543029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-543029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=no-preload-543029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_12_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-543029
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:32:24 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:32:24 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:32:24 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:32:24 +0000   Tue, 23 Jul 2024 15:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.227
	  Hostname:    no-preload-543029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb6b4649da84ee099e27e146836b0c7
	  System UUID:                9eb6b464-9da8-4ee0-99e2-7e146836b0c7
	  Boot ID:                    dc32264d-9a14-4f6d-bd66-36c40076c1e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5cfdc65f69-v2bhl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-543029                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-543029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-543029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wzbps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-543029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-78fcd8795b-dsfmg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node no-preload-543029 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-543029 event: Registered Node no-preload-543029 in Controller
	  Normal  CIDRAssignmentFailed     22m                cidrAllocator    Node no-preload-543029 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-543029 event: Registered Node no-preload-543029 in Controller
	
	
	==> dmesg <==
	[Jul23 15:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051490] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039765] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942519] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.942725] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.604953] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.029763] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.065548] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059306] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.177026] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.113880] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.287990] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[ +14.664725] systemd-fstab-generator[1164]: Ignoring "noauto" option for root device
	[  +0.063194] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.795946] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +5.035998] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.506659] systemd-fstab-generator[1917]: Ignoring "noauto" option for root device
	[  +3.773412] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.070787] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] <==
	{"level":"info","ts":"2024-07-23T15:21:39.860036Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:21:39.86884Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:21:39.872355Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4d07b7f8888ec66","initial-advertise-peer-urls":["https://192.168.72.227:2380"],"listen-peer-urls":["https://192.168.72.227:2380"],"advertise-client-urls":["https://192.168.72.227:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.227:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:21:39.87257Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:21:39.871334Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.227:2380"}
	{"level":"info","ts":"2024-07-23T15:21:39.875419Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.227:2380"}
	{"level":"info","ts":"2024-07-23T15:21:41.518962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:41.519045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:41.519082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 received MsgPreVoteResp from 4d07b7f8888ec66 at term 2"}
	{"level":"info","ts":"2024-07-23T15:21:41.519095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:41.519101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 received MsgVoteResp from 4d07b7f8888ec66 at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:41.519109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d07b7f8888ec66 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:41.519126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d07b7f8888ec66 elected leader 4d07b7f8888ec66 at term 3"}
	{"level":"info","ts":"2024-07-23T15:21:41.520772Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4d07b7f8888ec66","local-member-attributes":"{Name:no-preload-543029 ClientURLs:[https://192.168.72.227:2379]}","request-path":"/0/members/4d07b7f8888ec66/attributes","cluster-id":"1ac78debd130abb5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:21:41.520818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:21:41.520787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:21:41.521303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:21:41.521381Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:21:41.522Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-23T15:21:41.522003Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-23T15:21:41.522842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.227:2379"}
	{"level":"info","ts":"2024-07-23T15:21:41.523222Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:31:41.549288Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-07-23T15:31:41.560293Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":860,"took":"10.680134ms","hash":2585086080,"current-db-size-bytes":2830336,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2830336,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-23T15:31:41.560356Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2585086080,"revision":860,"compact-revision":-1}
	
	
	==> kernel <==
	 15:35:10 up 14 min,  0 users,  load average: 0.53, 0.22, 0.12
	Linux no-preload-543029 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] <==
	W0723 15:31:43.815826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:31:43.815929       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:31:43.816882       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:31:43.816969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:32:43.817902       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:32:43.817994       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0723 15:32:43.817902       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:32:43.818105       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:32:43.819390       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:32:43.819417       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:34:43.820611       1 handler_proxy.go:99] no RequestInfo found in the context
	W0723 15:34:43.820893       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:34:43.821105       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0723 15:34:43.821134       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:34:43.822359       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:34:43.822442       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] <==
	E0723 15:29:47.526875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:29:47.534702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:17.532963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:30:17.544623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:30:47.540881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:30:47.552555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:31:17.546663       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:31:17.559930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:31:47.552921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:31:47.567040       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:32:17.559012       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:32:17.574306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:32:24.223840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-543029"
	I0723 15:32:44.669516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="367.634µs"
	E0723 15:32:47.566612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:32:47.583014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:32:56.656385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="209.559µs"
	E0723 15:33:17.573359       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:33:17.590028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:33:47.580162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:33:47.597294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:34:17.585872       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:34:17.604795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:34:47.593553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:34:47.614374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0723 15:21:44.314729       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0723 15:21:44.329783       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.227"]
	E0723 15:21:44.330006       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0723 15:21:44.411670       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0723 15:21:44.411757       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:21:44.411811       1 server_linux.go:170] "Using iptables Proxier"
	I0723 15:21:44.414903       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0723 15:21:44.415303       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0723 15:21:44.415337       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:44.418406       1 config.go:197] "Starting service config controller"
	I0723 15:21:44.418485       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:21:44.418601       1 config.go:104] "Starting endpoint slice config controller"
	I0723 15:21:44.418665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:21:44.421063       1 config.go:326] "Starting node config controller"
	I0723 15:21:44.421129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:21:44.519316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:21:44.519853       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:21:44.521246       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] <==
	I0723 15:21:40.560567       1 serving.go:386] Generated self-signed cert in-memory
	I0723 15:21:42.832716       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0723 15:21:42.832760       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:42.839433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:21:42.839761       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0723 15:21:42.839903       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0723 15:21:42.840263       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0723 15:21:42.841519       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:21:42.841548       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:42.841908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0723 15:21:42.841947       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0723 15:21:42.940512       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0723 15:21:42.941895       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:42.942258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:32:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:32:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:32:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:32:44 no-preload-543029 kubelet[1293]: E0723 15:32:44.649262    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:32:56 no-preload-543029 kubelet[1293]: E0723 15:32:56.640784    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:33:09 no-preload-543029 kubelet[1293]: E0723 15:33:09.640150    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:33:23 no-preload-543029 kubelet[1293]: E0723 15:33:23.640128    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:33:37 no-preload-543029 kubelet[1293]: E0723 15:33:37.640475    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:33:38 no-preload-543029 kubelet[1293]: E0723 15:33:38.659355    1293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:33:38 no-preload-543029 kubelet[1293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:33:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:33:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:33:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:33:48 no-preload-543029 kubelet[1293]: E0723 15:33:48.643214    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:34:03 no-preload-543029 kubelet[1293]: E0723 15:34:03.639107    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:34:14 no-preload-543029 kubelet[1293]: E0723 15:34:14.641704    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:34:26 no-preload-543029 kubelet[1293]: E0723 15:34:26.640712    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:34:38 no-preload-543029 kubelet[1293]: E0723 15:34:38.654345    1293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:34:38 no-preload-543029 kubelet[1293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:34:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:34:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:34:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:34:41 no-preload-543029 kubelet[1293]: E0723 15:34:41.640511    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:34:56 no-preload-543029 kubelet[1293]: E0723 15:34:56.639530    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:35:08 no-preload-543029 kubelet[1293]: E0723 15:35:08.640389    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	
	
	==> storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] <==
	I0723 15:21:44.274794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 15:22:14.278531       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] <==
	I0723 15:22:14.915644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:22:14.924832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:22:14.924908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:22:32.330977       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:22:32.331276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841!
	I0723 15:22:32.332810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e25f0429-873a-43a8-b4e4-8a434517782e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841 became leader
	I0723 15:22:32.432098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841!
	

                                                
                                                
-- /stdout --
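The kubelet log above shows two recurring conditions on no-preload-543029: the metrics-server pod stays in ImagePullBackOff because the test deployment points it at the unreachable image fake.domain/registry.k8s.io/echoserver:1.4, and the iptables canary warning indicates the guest kernel has no ip6tables nat table. A minimal sketch of how either condition could be confirmed by hand, assuming the no-preload-543029 profile is still running; these commands are illustrative only and are not part of the recorded test run:
# confirm the image pull fails at the CRI level inside the node
out/minikube-linux-amd64 -p no-preload-543029 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"
# confirm the ip6tables nat table is unavailable in the guest kernel
out/minikube-linux-amd64 -p no-preload-543029 ssh "sudo ip6tables -t nat -L"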
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-543029 -n no-preload-543029
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-543029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-dsfmg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg: exit status 1 (62.869583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-dsfmg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.12s)
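The post-mortem describe above exits with NotFound because the pod name captured earlier (metrics-server-78fcd8795b-dsfmg) no longer exists by the time describe runs; the Deployment's ReplicaSet can recreate the pod under a new name. A minimal sketch of selecting the pod by label instead of by the captured name, assuming the metrics-server addon keeps its usual k8s-app=metrics-server label in kube-system (an assumption, not confirmed by this run):
# list and describe whatever metrics-server pod currently exists, by label
kubectl --context no-preload-543029 -n kube-system get pods -l k8s-app=metrics-server
kubectl --context no-preload-543029 -n kube-system describe pods -l k8s-app=metrics-server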

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
[... the identical helpers_test.go:329 connection-refused warning repeated 53 more times while the test kept polling 192.168.50.51:8443 ...]
E0723 15:29:49.699678   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
[... the identical helpers_test.go:329 connection-refused warning repeated 97 more times while the test kept polling 192.168.50.51:8443 ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0723 15:32:11.819568   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
	[the warning line above repeated 40 more times]
E0723 15:32:52.749413   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
E0723 15:34:49.699718   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
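For reference, the poll that keeps failing above is simply a pod list against the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard selector; a minimal manual re-check sketch (assumes the apiserver at 192.168.50.51:8443 is reachable again and that the relevant minikube profile's kubeconfig context is active, which this log does not show):

# hypothetical manual re-run of the same label-selector query the helper polls
kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard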
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
[... the WARNING above was repeated 45 more times at the poll interval; every attempt failed with "dial tcp 192.168.50.51:8443: connect: connection refused" ...]
E0723 15:37:11.818749   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
[... the WARNING above was repeated 43 more times with the same connection-refused error before the poller's client rate limiter hit its deadline ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (218.072416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-000272" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
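Hedged triage note (editorial, not part of the captured test output): the 9m0s wait that timed out above is a poll of the dashboard pods by label, so an equivalent manual check against this profile would look roughly like the commands below. The kubectl context name is assumed to match the minikube profile name (minikube's default behaviour); the profile, namespace, and label selector are taken from the logs above.

	out/minikube-linux-amd64 status -p old-k8s-version-000272
	kubectl --context old-k8s-version-000272 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Given the post-mortem below (apiserver "Stopped", connection refused on 192.168.50.51:8443), both commands would be expected to report a stopped apiserver or fail, rather than show a running dashboard pod.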
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (215.220393ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25: (1.572637668s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
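
The run of "Error dialing TCP ... connect: no route to host" lines above is libmachine polling port 22 of the no-preload VM until its SSH service answers. Below is a minimal Go sketch of that dial-and-retry pattern; the address, attempt count, and delay are illustrative placeholders, not values taken from minikube's source.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP address until it accepts a connection or the
// attempt budget is exhausted, mirroring the repeated dial errors above.
func waitForSSH(addr string, attempts int, delay time.Duration) error {
	for i := 1; i <= attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 is reachable; provisioning can continue
		}
		fmt.Printf("attempt %d: %v\n", i, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("%s not reachable after %d attempts", addr, attempts)
}

func main() {
	// placeholder values; not taken from minikube's source
	if err := waitForSSH("192.168.72.227:22", 5, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
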
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
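
The "will retry after ..." delays above (roughly 245ms growing to several seconds) show the wait-for-IP loop backing off between DHCP lease checks. The sketch below produces a similar growing, jittered delay schedule; the base delay, growth factor, and jitter fraction are assumptions for illustration, not minikube's actual retry parameters.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffDelays produces a growing, jittered schedule similar to the
// "will retry after ..." intervals in the log above.
func backoffDelays(base time.Duration, factor float64, n int) []time.Duration {
	delays := make([]time.Duration, n)
	d := float64(base)
	for i := 0; i < n; i++ {
		jitter := d * 0.5 * rand.Float64() // up to 50% jitter (assumed)
		delays[i] = time.Duration(d + jitter)
		d *= factor
	}
	return delays
}

func main() {
	for i, d := range backoffDelays(250*time.Millisecond, 1.5, 8) {
		fmt.Printf("retry %d after %v\n", i+1, d.Round(time.Millisecond))
	}
}
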
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
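
WaitForSSH above shells out to /usr/bin/ssh with host-key checking disabled and runs "exit 0"; a zero exit status is taken to mean the guest is accepting SSH logins. The Go sketch below mimics that probe with os/exec; the key path is a placeholder and only the IP comes from the log.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs "exit 0" through the system ssh binary with options close to
// those in the log; a nil error (exit status 0) means SSH is accepting logins.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// the IP comes from the log; the key path here is a placeholder
	fmt.Println(sshReady("192.168.39.200", "/path/to/machines/embed-certs-486436/id_rsa"))
}
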
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
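
configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost, and minikube. The sketch below builds a certificate with those SANs using crypto/x509; for brevity it self-signs instead of signing with the ca.pem/ca-key.pem pair the provisioner actually uses, so treat it only as an approximation of the step.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// error handling omitted to keep the sketch short
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-486436"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the provision.go line above
		DNSNames:    []string{"embed-certs-486436", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
	}
	// self-signed here; the real provisioner signs with its CA key instead
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
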
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
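
The fix step compares the guest's clock (read via date over SSH) with the locally recorded time and accepts the host when the delta stays within a tolerance. A small sketch of that comparison follows; the two-second tolerance is an assumed value, and only the ~83ms delta comes from the log above.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and whether
// it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1721748021, 563549452)          // value reported by the guest's date command
	host := guest.Add(-83 * time.Millisecond)          // stand-in for the locally recorded time
	d, ok := clockDeltaOK(guest, host, 2*time.Second)  // tolerance is an assumed value
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}
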
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
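
The sed commands above rewrite the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The sketch below performs the equivalent substitutions in Go on an in-memory copy of an assumed minimal drop-in file; the starting contents are invented for illustration.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// assumed minimal drop-in; the real file on the guest has more settings
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.5"

[crio.runtime]
cgroup_manager = "systemd"
`
	// equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
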
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
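
Because the guest has no cached images for v1.30.3, the preload step stats /preloaded.tar.lz4 on the VM and, when the stat fails, copies the ~406 MB tarball from the local cache over SSH. The sketch below shows only the existence check; in minikube the stat runs remotely over ssh_runner, here it runs against the local filesystem purely for illustration.

package main

import (
	"fmt"
	"os"
)

// needsPreload reports whether the preload tarball is absent and therefore
// needs to be copied over, mirroring the stat-then-scp sequence above.
func needsPreload(path string) bool {
	_, err := os.Stat(path)
	return os.IsNotExist(err)
}

func main() {
	fmt.Println(needsPreload("/preloaded.tar.lz4"))
}
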
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
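Editor's note: the run of commands above installs each CA into the guest's trust store the way OpenSSL expects: the certificate is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink pointing at it is created in /etc/ssl/certs. A minimal Go sketch of the same idea, shelling out to openssl as the log does (the paths in main are illustrative, not taken from this run, and the real steps execute on the VM over SSH, not locally):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links certPath into trustDir under OpenSSL's <subject-hash>.0 name,
	// mirroring the "openssl x509 -hash" + "ln -fs" pair of steps in the log above.
	func installCA(certPath, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(trustDir, hash+".0")
		_ = os.Remove(link) // "-f" behaviour: replace an existing link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative paths only.
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}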
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
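Editor's note: the `-checkend 86400` calls above ask whether each control-plane certificate is still valid for at least 24 hours; a non-zero exit would make minikube regenerate that certificate. An equivalent check written directly in Go (a sketch of the technique, not minikube's own code; the path in main is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires within d,
	// the same question "openssl x509 -checkend" answers.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}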
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
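Editor's note: the restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full `kubeadm init`, so existing cluster state is reused. A sketch of that phase loop, executed locally rather than through ssh_runner (the binary path, config path and phase list are copied from the log; everything else is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm" // path as seen in the log
		config := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := append([]string{"init", "phase"}, phase...)
			args = append(args, "--config", config)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}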
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
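Editor's note: while old-k8s-version restarts, the default-k8s-diff-port VM is still booting, so libmachine keeps asking libvirt for the domain's DHCP lease and, finding none, retries after a growing delay (1.17s, 1.48s, 2.21s ...). The pattern is a bounded retry loop with backoff, roughly as below (a sketch; getIP, the delays and the jitter are placeholders, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls getIP until it succeeds or the deadline passes,
	// sleeping a little longer (with jitter) after each failed attempt.
	func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 2 // grow the base delay between attempts
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		// Placeholder lookup that never succeeds, just to exercise the loop briefly.
		_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 5*time.Second)
		fmt.Println(err)
	}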
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
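Editor's note: after the kubeadm phases, the restart path simply waits for a kube-apiserver process to appear on the node, re-running `pgrep -xnf kube-apiserver.*minikube.*` roughly twice a second, as the repeated lines above show. A local sketch of that wait loop (the pattern only, not minikube's api_server.go; the timeout is an assumed example):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until kube-apiserver shows up or the timeout elapses.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(30 * time.Second); err != nil {
			fmt.Println(err)
		}
	}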
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
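Editor's note: configureAuth regenerates the docker-machine style server certificate for the VM: it is signed by the ca.pem/ca-key.pem pair and carries the IPs and hostnames from the `san=[...]` list above, so TLS connections verify whichever name the client dials. A compact Go sketch of issuing such a SAN certificate from a CA (the CA here is generated on the fly purely for illustration; the real flow loads the existing CA files, and the SAN values are copied from the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative CA; the real flow reads ca.pem / ca-key.pem from the minikube home.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "machineCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the same kind of SAN list the log reports.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "default-k8s-diff-port-911217"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-911217", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.64")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}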
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
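Editor's note: fix.go reads the guest's clock over SSH (presumably `date +%s.%N`; the `%!s(MISSING)` in the log is a printf escaping artifact in the captured command string), compares it with the host's wall clock, and only resyncs when the delta exceeds the allowed tolerance; here the 74ms delta passes. Parsing and comparing that timestamp is small enough to sketch (the tolerance used below is an assumed example, not the value from this run, and the parser assumes a full nanosecond fraction as printed in the log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the "seconds.nanoseconds" string a guest's date command
	// prints and returns its absolute distance from the host clock.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Assumes the fractional part is a full 9-digit nanosecond field.
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		d := time.Unix(sec, nsec).Sub(host)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Guest output value taken from the log above; tolerance is an assumed example.
		delta, err := guestClockDelta("1721748061.738020629", time.Now())
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < 2*time.Second)
	}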
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
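The interleaved no-preload-543029 lines above show the KVM driver polling libvirt for a DHCP lease and sleeping a growing, jittered interval between attempts (265ms, 304ms, 326ms, then roughly doubling). A minimal Go sketch of that wait-with-backoff pattern, assuming a hypothetical lookupIP callback in place of the real libvirt lease query:

    // A minimal sketch (not minikube's retry.go) of the wait-for-IP loop shown
    // above: retry a lookup with a growing, jittered delay until it succeeds or
    // the deadline passes. lookupIP is a hypothetical stand-in for the libvirt
    // DHCP-lease query the KVM driver performs.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jitter the delay so parallel profiles don't retry in lockstep,
            // then grow it, mirroring the widening intervals in the log.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.39.10", nil // placeholder address for the demo
        }, 30*time.Second)
        fmt.Println(ip, err)
    }

Growing the delay keeps the several profiles starting in parallel in this run from hammering libvirt at a fixed cadence while a domain is still booting.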
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
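The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pipeline above rewrites /etc/hosts so that exactly one host.minikube.internal entry remains (the same idiom is used again below for control-plane.minikube.internal). A rough Go equivalent of that idempotent-append idiom, an illustration only, with the path and names taken from the log:

    // Drop any existing line for the hostname, then append exactly one
    // "IP<TAB>hostname" entry, mirroring the grep -v / echo / cp pipeline.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, host, ip string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirror `grep -v $'\t<host>$'`: skip lines already ending in the hostname.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Writing /etc/hosts needs root; point this at a scratch copy to try it out.
        fmt.Println(ensureHostsEntry("/tmp/hosts.copy", "host.minikube.internal", "192.168.61.1"))
    }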
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
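The two `sudo crictl images --output json` runs bracket the preload: before extraction the log reports "couldn't find preloaded image ... assuming images are not preloaded", and after extraction it reports that all images are preloaded. A sketch of the check behind those runs, assuming crictl's JSON output keeps its usual {"images":[{"repoTags":[...]}]} shape:

    // List the runtime's images via crictl and look for the tag the preload
    // is expected to contain. Illustration only; assumes crictl is on PATH.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
        fmt.Println(ok, err) // false before the tarball is extracted, true afterwards
    }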
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
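The series of `openssl x509 -noout ... -checkend 86400` probes above asks whether each control-plane certificate expires within the next 24 hours; only certificates that pass are reused on restart. The same check can be written directly in Go (a minimal equivalent, not minikube's code), with the path taken from the log:

    // Parse a PEM certificate and report whether it expires within the window,
    // matching the semantics of `openssl x509 -checkend <seconds>`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when NotAfter falls inside the window (or has already passed),
        // i.e. when the certificate would need to be regenerated.
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }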
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
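Between 15:21:10 and 15:21:15 the healthz polling above sees https://192.168.61.64:8444/healthz answer 403 (anonymous request), then 500 while the rbac and priority-class post-start hooks finish, then 200. A stripped-down sketch of that polling loop; InsecureSkipVerify and an unauthenticated GET stand in for the client-certificate handling the real check uses:

    // Poll the apiserver's /healthz every 500ms until it answers 200 OK
    // or the timeout expires.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.64:8444/healthz", time.Minute))
    }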
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
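The readiness gating above (node "Ready", then each system-critical pod "Ready") can be reproduced by hand against the same cluster; a minimal sketch, assuming the profile's kubeconfig is the active context and the standard component labels shown in the log:

    # check the node condition that keeps returning "Ready":"False" above
    kubectl get node default-k8s-diff-port-911217 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # wait for the same system-critical pods minikube polls for
    kubectl -n kube-system wait pod -l k8s-app=kube-dns   --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m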
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
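One way to confirm the metrics-server addon enabled above is actually serving data (a manual spot-check, not something the test itself runs; assumes metrics-server's usual aggregated API registration):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes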
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
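The server certificate generated above embeds the SANs listed in the log (127.0.0.1, 192.168.72.227, localhost, minikube, no-preload-543029); a quick way to confirm them on the host, assuming openssl is installed there:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'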
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
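The reported delta is simply the guest timestamp minus the remote one: 1721748080.694505305 - 1721748080.603582679 = 0.090922626 s, i.e. the 90.922626 ms that fix.go compares against its skew tolerance.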
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
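	The lines above record minikube renaming any bridge/podman CNI definitions (here /etc/cni/net.d/87-podman-bridge.conflist) so they do not conflict with the CNI it manages itself. A minimal Go sketch of the same idea; paths and the ".mk_disabled" suffix mirror the log, while the helper name is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// disableConflictingCNIConfigs renames bridge/podman CNI definitions in dir
	// by appending ".mk_disabled", so the container runtime ignores them.
	func disableConflictingCNIConfigs(dir string) ([]string, error) {
		var disabled []string
		for _, pattern := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pattern))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("disabled CNI configs:", disabled)
	}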
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
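	The steps between 15:21:21.6 and 15:21:21.9 rewrite the CRI-O drop-in so the runtime uses the expected pause image and the cgroupfs cgroup manager, allow unprivileged low ports, and then restart the service. A minimal sketch of producing an equivalent drop-in directly; the file path and field names follow the log, but this is illustrative only, not minikube's actual implementation (which edits the existing file with sed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const crioDropIn = "/etc/crio/crio.conf.d/02-crio.conf"

	// writeCrioDropIn writes a CRI-O drop-in that pins the pause image,
	// selects cgroupfs as the cgroup manager, and re-enables low ports
	// for unprivileged pods -- the same settings the log applies with sed.
	func writeCrioDropIn(pauseImage string) error {
		conf := fmt.Sprintf(`[crio.image]
	pause_image = %q

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`, pauseImage)
		return os.WriteFile(crioDropIn, []byte(conf), 0o644)
	}

	func main() {
		if err := writeCrioDropIn("registry.k8s.io/pause:3.10"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Pick up the new configuration, as the log does after its edits.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
			os.Exit(1)
		}
	}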
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
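	The cache_images steps above follow a simple pattern: inspect each required image in the runtime, and if it is missing (or does not match the expected ID), remove any stale reference with crictl and load the cached tarball with podman load. A stripped-down sketch of that loop; the tarball directory and image names are taken from the log, and the helper is illustrative rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// ensureImage loads a cached image tarball if the image is not already
	// present in the podman/CRI-O image store.
	func ensureImage(image, tarball string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // already present
		}
		// Remove any stale reference, then load the cached tarball.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		cacheDir := "/var/lib/minikube/images"
		images := map[string]string{
			"registry.k8s.io/etcd:3.5.14-0":              "etcd_3.5.14-0",
			"registry.k8s.io/coredns/coredns:v1.11.1":    "coredns_v1.11.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5": "storage-provisioner_v5",
		}
		for image, file := range images {
			if err := ensureImage(image, filepath.Join(cacheDir, file)); err != nil {
				fmt.Println("load failed:", err)
			}
		}
	}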
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
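	The kubelet unit shown above is installed as a systemd drop-in whose single ExecStart= override carries the node-specific flags (hostname override, node IP, kubeconfig paths); the log later scp's it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch of writing such a drop-in, with paths and flag values copied from the log and the helper name hypothetical:

	package main

	import (
		"fmt"
		"os"
	)

	// writeKubeletDropIn writes a systemd drop-in that overrides ExecStart
	// with the node-specific kubelet flags seen in the log.
	func writeKubeletDropIn(nodeName, nodeIP, kubeletBin string) error {
		unit := fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, kubeletBin, nodeName, nodeIP)
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			return err
		}
		return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
	}

	func main() {
		err := writeKubeletDropIn("no-preload-543029", "192.168.72.227",
			"/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}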
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
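	Before deciding whether the existing control-plane certificates can be reused, the log runs openssl x509 -checkend 86400 against each of them, i.e. it verifies that nothing expires within the next 24 hours. The same check expressed in Go with crypto/x509 (a sketch; the file list mirrors the log, and only the first certificate in each file is inspected):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within the given duration (the openssl -checkend equivalent).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 24*time.Hour)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", c, soon)
		}
	}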
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
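	Restarting the primary control plane here is done not with a full kubeadm init but with individual init phases run in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A compact sketch of that sequence; the binary path and config file are taken from the log, and error handling is simplified:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"

		// The same phase order the log runs when restarting the control plane.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", config)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}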
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
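	Once the phases complete, the log waits for the apiserver by polling https://<node-ip>:8443/healthz, tolerating connection-refused and 403 responses while the server and its RBAC bootstrap come up (both are visible below). A minimal polling sketch; the insecure TLS config is only because the sketch does not load the cluster CA, and the names are illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline passes. Early connection-refused errors and 403
	// responses are expected while the server starts up.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", status)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.227:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}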
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
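The gather loop above can be reproduced by hand when debugging a run like this; a minimal sketch, assuming shell access to the node under test (for example via minikube ssh), using the same commands the runner issues in the log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver       # any apiserver container, running or exited?
    sudo crictl ps -a --quiet --name=etcd                 # same check for etcd
    sudo journalctl -u kubelet -n 400                     # kubelet log tail
    sudo journalctl -u crio -n 400                        # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Empty output from the crictl queries, as in the cycle above, means no control-plane containers exist at all, which is why the describe-nodes step fails with "connection refused" on localhost:8443.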
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
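The interleaved pod_ready lines come from other test profiles running in parallel, each polling its own metrics-server pod. The same condition can be checked directly with kubectl; a minimal sketch, assuming the active kubeconfig points at the profile under test (pod name taken from the log above):

    kubectl -n kube-system get pod metrics-server-78fcd8795b-dsfmg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" while the pod is unready, matching the pod_ready.go:102 lines above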
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
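Since CRI-O keeps reporting no control-plane containers, the next place to look is the kubelet journal the runner is already tailing; a minimal sketch, assuming the control plane runs as kubeadm static pods (the default for this v1.20.0 setup), filtering that output for the apiserver:

    sudo journalctl -u kubelet -n 400 --no-pager | grep -i kube-apiserver
    # kubelet creates the control-plane containers from /etc/kubernetes/manifests (kubeadm default path; assumption);
    # errors here usually explain why crictl shows no kube-apiserver container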
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
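(The pass above is the diagnostic loop minikube repeats while the v1.20.0 control plane stays down: probe each expected control-plane container via crictl, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output. As a rough manual equivalent, a sketch only: the commands mirror the ones quoted in the log, run from inside the node, e.g. via "minikube ssh", with --no-pager added for convenience and not taken from the report.

    sudo crictl ps -a                                    # container status
    sudo journalctl -u kubelet -n 400 --no-pager         # kubelet logs
    sudo journalctl -u crio -n 400 --no-pager            # CRI-O logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
)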
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
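(Every describe-nodes attempt in these passes fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl results: no kube-apiserver container exists, so nothing is listening on 8443. A hedged sketch of confirming that by hand on the node, using standard crictl/ss/systemctl invocations that are not taken from this report:

    sudo crictl ps -a --name kube-apiserver         # is an apiserver container present at all?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
    sudo systemctl status kubelet --no-pager        # kubelet would start the static apiserver pod
)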
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
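(The interleaved pod_ready lines come from the other test processes (66641, 65177 and 64842), each polling a metrics-server pod that never reports Ready. A manual equivalent, sketched with generic kubectl usage: the <profile> context name is a placeholder, only the pod name is taken from the log.

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-mkl8l -o wide
    kubectl --context <profile> -n kube-system describe pod metrics-server-569cc877fc-mkl8l
)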
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
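	(The interleaved pod_ready lines come from three other test clusters polling their metrics-server pods, which never report Ready within the wait window. A rough manual equivalent of that check is sketched below; the context placeholder and the `k8s-app=metrics-server` label selector are assumptions, not values taken from this log.)

```bash
# Substitute the kubectl context of the cluster under test.
kubectl --context <context> -n kube-system get pods -l k8s-app=metrics-server

# Read the Ready condition the test is waiting on for a specific pod.
kubectl --context <context> -n kube-system get pod <metrics-server-pod> \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```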
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
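
Each repetition of this block is minikube's apiserver health probe for the v1.20.0 profile: pgrep for a kube-apiserver process, a crictl listing per control-plane component, and a log sweep once nothing is found. Condensed into a manual sketch (every command is copied from the Run: lines above; the loop is only a convenience, not how logs.go sequences the calls):

    # Run on the minikube node (e.g. via `minikube ssh`).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"          # empty output => no such container
    done
    sudo journalctl -u kubelet -n 400                    # kubelet logs
    sudo journalctl -u crio -n 400                       # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker fallback
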
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
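
The recurring "connection to the server localhost:8443 was refused" stderr is expected here: /var/lib/minikube/kubeconfig points at the local apiserver port, and with no kube-apiserver container running nothing is listening on it. A quick check from the node (not part of this run's output, shown only as an illustration) would be:

    # Nothing should be listening on 8443 while the apiserver is down.
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
    # The same call the log gatherer keeps retrying, for reference:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
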
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
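
Process 65177 gives up on restarting the existing control plane once the 4m0s metrics-server wait above times out, and falls back to wiping it with kubeadm reset before re-initialising. The command it runs (copied from the Run: line), shown in isolation:

    # --force skips kubeadm's interactive confirmation; the CRI socket must match the runtime (CRI-O here).
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
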
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
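
After the reset, the config check at 15:24:58.699 fails because all four kubeconfig files under /etc/kubernetes are already gone, so the stale-config cleanup degenerates into a grep-then-remove pass that finds nothing to keep. The sequence above is equivalent to this condensed sketch (minikube issues the commands one by one rather than looping):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected control-plane endpoint.
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done
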
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
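
kubeadm init is then launched with a long --ignore-preflight-errors list so that leftover manifests, the non-empty /var/lib/minikube directories, and the busy kubelet port do not abort the run; the only preflight output at this point is the warning that the kubelet service is not enabled. Clearing that warning out of band would just be the command the warning itself suggests (illustrative, not something this run does):

    sudo systemctl enable kubelet.service
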
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
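The stale-config cleanup just above follows a simple pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove the file if the endpoint is absent (here the files do not exist at all, so grep exits with status 2). A minimal hand-run sketch of the same pattern, assuming only the four file names taken from the log:

    # remove any kubeconfig that does not reference the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done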
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
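The [api-check] phase above polls the API server until it answers as healthy. An illustrative manual equivalent using the standard /healthz and /readyz endpoints (not a command this test ran):

    # probe the freshly started apiserver the same way kubeadm's api-check does
    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/healthz'
    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/readyz?verbose'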
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
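If the bootstrap token shown in the join command above expires (kubeadm tokens default to a 24h TTL), a fresh join command can be generated on the control plane with standard kubeadm; this is background, not a step from this run:

    # mint a new bootstrap token and print the matching worker join command
    sudo kubeadm token create --print-join-command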
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
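The 496-byte conflist copied above is not reproduced in the log. As a rough illustration only, a generic bridge+portmap CNI config of the kind the bridge plugin consumes looks like the following; every field value here is an assumption, not minikube's actual file:

    # write an example bridge CNI config to a scratch path (illustrative values only)
    cat <<'EOF' > /tmp/1-k8s.conflist.example
    { "cniVersion": "0.3.1", "name": "bridge", "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } } ] }
    EOF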
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
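At this point the harness has given up waiting for metrics-server-569cc877fc-mkl8l to become Ready. A hedged set of follow-up commands one could run by hand to see why, assuming the k8s-app=metrics-server label the addon normally carries (standard kubectl, not commands from this run):

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system logs deploy/metrics-server --tail=100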
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
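Each "Gathering logs for ..." step above reduces to two crictl calls: resolve the container ID by name, then tail its log. A hand-run equivalent mirroring the commands in the log:

    # find the container ID for a component, then tail its logs
    CID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$CID"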
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
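The repeated "kubectl get sa default" calls above form a wait loop: minikube keeps polling until the kube-controller-manager has created the default service account, which elevateKubeSystemPrivileges needs before it can bind cluster-admin. A hand-rolled sketch of the same wait, reusing the binary and kubeconfig paths from the log:

    # wait until the default service account exists
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done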
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
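The "minor skew: 1" note above records that the kubectl client (1.30.3) is one minor version behind the cluster (1.31.0-beta.0), which is within the skew kubectl supports. As an illustrative follow-up (not part of the captured run), the skew for this profile can be re-checked with:

	# Compare client and server versions for the no-preload-543029 context
	kubectl version --context no-preload-543029
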
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
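Every kubelet-check failure in the block above is the same probe against the kubelet health endpoint on port 10248. To reproduce it by hand on the node, something like the following could be used (illustrative commands against the old-k8s-version-000272 profile named later in this log; not part of the captured run):

	# Probe the kubelet health endpoint from inside the VM
	minikube ssh -p old-k8s-version-000272 "curl -sS http://localhost:10248/healthz"
	# Check whether the kubelet unit is running at all
	minikube ssh -p old-k8s-version-000272 "sudo systemctl status kubelet --no-pager"
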
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
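The sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint and removed when the check fails (here because the files do not exist after the reset). A rough shell equivalent of that cleanup, assuming the same four files:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"   # file missing or pointing at a different endpoint
	  fi
	done
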
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
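The describe-nodes attempt fails because nothing answers on localhost:8443, which matches the earlier crictl queries finding no kube-apiserver container. Illustrative checks on the node (assuming ss is available in the guest image; not part of the captured run):

	# Confirm no API server container was ever created (empty output expected here)
	minikube ssh -p old-k8s-version-000272 "sudo crictl ps -a | grep kube-apiserver"
	# Confirm nothing is listening on the API server port
	minikube ssh -p old-k8s-version-000272 "sudo ss -ltn | grep 8443"
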
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
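Following the suggestion printed above, the usual next steps are to read the kubelet journal on the node and retry the start with the cgroup-driver override. Illustrative invocations for this profile (the flag is the one named in the suggestion; the exact command line used by the test harness differs):

	# Inspect why the kubelet keeps failing its health check
	minikube ssh -p old-k8s-version-000272 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Retry with the kubelet cgroup driver forced to systemd, as suggested
	minikube start -p old-k8s-version-000272 --extra-config=kubelet.cgroup-driver=systemd
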
	
	
	==> CRI-O <==
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.255213185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749078255184880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a64f456-e8cc-4316-a397-2e15596c7a83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.255778966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61b04ddd-9aa0-490c-8eed-e9dd960529c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.255831352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61b04ddd-9aa0-490c-8eed-e9dd960529c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.255868106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=61b04ddd-9aa0-490c-8eed-e9dd960529c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.288121055Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fb2d42f-fe7f-425a-96ec-90d28ba457ec name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.288199285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fb2d42f-fe7f-425a-96ec-90d28ba457ec name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.289774648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ea018f7-41a3-418d-92f4-04a053ddad2d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.290298401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749078290261276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ea018f7-41a3-418d-92f4-04a053ddad2d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.290845390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddede832-03f7-45d4-99e5-84a79e965ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.290918767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddede832-03f7-45d4-99e5-84a79e965ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.290981600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ddede832-03f7-45d4-99e5-84a79e965ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.319920615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e5e5f12-690b-4877-b46c-5e80997c0528 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.320019639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e5e5f12-690b-4877-b46c-5e80997c0528 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.321226917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ca889a3-dd39-4df0-a55e-63b6828182b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.321710422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749078321683528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ca889a3-dd39-4df0-a55e-63b6828182b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.322329033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=184ad1bb-0a5f-476f-a98b-f01b93c45f6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.322405764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=184ad1bb-0a5f-476f-a98b-f01b93c45f6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.322503041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=184ad1bb-0a5f-476f-a98b-f01b93c45f6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.355079365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83c44b13-f278-42cc-bcf8-87ffc5315e68 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.355179472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83c44b13-f278-42cc-bcf8-87ffc5315e68 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.356544613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b019d84-dfa7-469a-bc39-50f74cc02a57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.356954578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749078356932855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b019d84-dfa7-469a-bc39-50f74cc02a57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.357587033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5f13249-5935-48bb-9f1b-c0018d17c58e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.357645816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5f13249-5935-48bb-9f1b-c0018d17c58e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:37:58 old-k8s-version-000272 crio[653]: time="2024-07-23 15:37:58.357676708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e5f13249-5935-48bb-9f1b-c0018d17c58e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul23 15:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051105] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.906859] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.937543] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.495630] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.117641] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061578] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.222393] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.111093] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.239582] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.000298] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.060522] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958927] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul23 15:21] kauditd_printk_skb: 46 callbacks suppressed
	[Jul23 15:24] systemd-fstab-generator[5081]: Ignoring "noauto" option for root device
	[Jul23 15:26] systemd-fstab-generator[5360]: Ignoring "noauto" option for root device
	[  +0.066445] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:37:58 up 17 min,  0 users,  load average: 0.00, 0.01, 0.02
	Linux old-k8s-version-000272 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: goroutine 113 [select]:
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0008884b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000dd7320, 0x0, 0x0)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00085da40)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: goroutine 146 [syscall]:
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: syscall.Syscall6(0xe8, 0xe, 0xc0007abb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc0007abb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000d12120, 0x0, 0x0, 0x0)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000888a00)
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 23 15:37:58 old-k8s-version-000272 kubelet[6549]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 23 15:37:58 old-k8s-version-000272 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 23 15:37:58 old-k8s-version-000272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (220.605441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-000272" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)
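The post-mortem above shows the kubelet on old-k8s-version-000272 exiting with status 255 and the apiserver reported as Stopped, with an empty container list. A minimal sketch of a manual follow-up, assuming the same profile name and minikube binary path shown above (these commands were not part of the automated run):

	# Confirm the reported component states for the profile
	out/minikube-linux-amd64 status -p old-k8s-version-000272
	# Inspect the crashing kubelet unit on the node
	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"
	# List all containers CRI-O knows about (the 'container status' section above was empty)
	out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo crictl ps -a"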

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (395.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-486436 -n embed-certs-486436
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:41:19.888708553 +0000 UTC m=+6289.094453301
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-486436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-486436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.391µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-486436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
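The assertions above wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then verify that the dashboard-metrics-scraper deployment uses the registry.k8s.io/echoserver:1.4 image that the addon was enabled with. A minimal sketch of the equivalent manual checks, assuming the embed-certs-486436 context from the logs above (not part of the automated run):

	# Wait for the dashboard pods to become Ready
	kubectl --context embed-certs-486436 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s
	# Inspect the scraper deployment and the image it actually runs
	kubectl --context embed-certs-486436 -n kubernetes-dashboard describe deploy dashboard-metrics-scraper
	kubectl --context embed-certs-486436 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'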
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-486436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-486436 logs -n 25: (1.233730488s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:39 UTC | 23 Jul 24 15:39 UTC |
	| start   | -p newest-cni-459494 --memory=2200 --alsologtostderr   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:39 UTC | 23 Jul 24 15:40 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-459494             | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-459494                  | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-459494 --memory=2200 --alsologtostderr   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	| start   | -p auto-562147 --memory=3072                           | auto-562147                  | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-459494 image list                           | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	| delete  | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC | 23 Jul 24 15:41 UTC |
	| start   | -p kindnet-562147                                      | kindnet-562147               | jenkins | v1.33.1 | 23 Jul 24 15:41 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:41:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:41:14.553386   73933 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:41:14.553505   73933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:41:14.553514   73933 out.go:304] Setting ErrFile to fd 2...
	I0723 15:41:14.553521   73933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:41:14.553829   73933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:41:14.554559   73933 out.go:298] Setting JSON to false
	I0723 15:41:14.555833   73933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8621,"bootTime":1721740654,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:41:14.555904   73933 start.go:139] virtualization: kvm guest
	I0723 15:41:14.558221   73933 out.go:177] * [kindnet-562147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:41:14.559956   73933 notify.go:220] Checking for updates...
	I0723 15:41:14.559992   73933 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:41:14.561557   73933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:41:14.563234   73933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:41:14.564785   73933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:41:14.566258   73933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:41:14.567780   73933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:41:14.569675   73933 config.go:182] Loaded profile config "auto-562147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:41:14.569823   73933 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:41:14.569951   73933 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:41:14.570074   73933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:41:14.608189   73933 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 15:41:14.609556   73933 start.go:297] selected driver: kvm2
	I0723 15:41:14.609572   73933 start.go:901] validating driver "kvm2" against <nil>
	I0723 15:41:14.609583   73933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:41:14.610363   73933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:41:14.610476   73933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:41:14.626799   73933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:41:14.626851   73933 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 15:41:14.627151   73933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:41:14.627189   73933 cni.go:84] Creating CNI manager for "kindnet"
	I0723 15:41:14.627205   73933 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 15:41:14.627271   73933 start.go:340] cluster config:
	{Name:kindnet-562147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:41:14.627439   73933 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:41:14.629397   73933 out.go:177] * Starting "kindnet-562147" primary control-plane node in "kindnet-562147" cluster
	I0723 15:41:14.131985   73413 main.go:141] libmachine: (auto-562147) DBG | domain auto-562147 has defined MAC address 52:54:00:9d:d4:58 in network mk-auto-562147
	I0723 15:41:14.132554   73413 main.go:141] libmachine: (auto-562147) DBG | unable to find current IP address of domain auto-562147 in network mk-auto-562147
	I0723 15:41:14.132598   73413 main.go:141] libmachine: (auto-562147) DBG | I0723 15:41:14.132527   73435 retry.go:31] will retry after 1.074052825s: waiting for machine to come up
	I0723 15:41:15.208033   73413 main.go:141] libmachine: (auto-562147) DBG | domain auto-562147 has defined MAC address 52:54:00:9d:d4:58 in network mk-auto-562147
	I0723 15:41:15.208534   73413 main.go:141] libmachine: (auto-562147) DBG | unable to find current IP address of domain auto-562147 in network mk-auto-562147
	I0723 15:41:15.208565   73413 main.go:141] libmachine: (auto-562147) DBG | I0723 15:41:15.208472   73435 retry.go:31] will retry after 1.185838252s: waiting for machine to come up
	I0723 15:41:16.395515   73413 main.go:141] libmachine: (auto-562147) DBG | domain auto-562147 has defined MAC address 52:54:00:9d:d4:58 in network mk-auto-562147
	I0723 15:41:16.396061   73413 main.go:141] libmachine: (auto-562147) DBG | unable to find current IP address of domain auto-562147 in network mk-auto-562147
	I0723 15:41:16.396087   73413 main.go:141] libmachine: (auto-562147) DBG | I0723 15:41:16.396000   73435 retry.go:31] will retry after 1.11936555s: waiting for machine to come up
	I0723 15:41:17.516964   73413 main.go:141] libmachine: (auto-562147) DBG | domain auto-562147 has defined MAC address 52:54:00:9d:d4:58 in network mk-auto-562147
	I0723 15:41:17.517288   73413 main.go:141] libmachine: (auto-562147) DBG | unable to find current IP address of domain auto-562147 in network mk-auto-562147
	I0723 15:41:17.517318   73413 main.go:141] libmachine: (auto-562147) DBG | I0723 15:41:17.517245   73435 retry.go:31] will retry after 1.598656319s: waiting for machine to come up
	I0723 15:41:14.630890   73933 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:41:14.630947   73933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:41:14.630960   73933 cache.go:56] Caching tarball of preloaded images
	I0723 15:41:14.631053   73933 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:41:14.631067   73933 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:41:14.631189   73933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kindnet-562147/config.json ...
	I0723 15:41:14.631213   73933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/kindnet-562147/config.json: {Name:mkaa58039c39108bc641e77a4c7d738a007275db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:41:14.631378   73933 start.go:360] acquireMachinesLock for kindnet-562147: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.513675064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749280513652102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8914b50-6337-4d7c-ac02-b681eef20844 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.514373323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5e997f4-6196-4e7e-9865-e9b73596e77c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.514485953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5e997f4-6196-4e7e-9865-e9b73596e77c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.514677748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5e997f4-6196-4e7e-9865-e9b73596e77c name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.558506605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c26601c0-221d-4146-925d-783c087a1ba6 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.558614999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c26601c0-221d-4146-925d-783c087a1ba6 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.559595904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8492baf-22c4-4c65-b2f1-443630824482 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.560208797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749280560177386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8492baf-22c4-4c65-b2f1-443630824482 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.560847408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1459d8e8-99a8-4d9b-b33c-d4241072342e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.560919030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1459d8e8-99a8-4d9b-b33c-d4241072342e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.561781798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1459d8e8-99a8-4d9b-b33c-d4241072342e name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.607277821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d353bf18-ddb8-4512-9529-bc3fe158a7d1 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.607437713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d353bf18-ddb8-4512-9529-bc3fe158a7d1 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.608828571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=939728fe-e81e-476e-b74c-c6d8b76c5deb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.609775229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749280609744571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=939728fe-e81e-476e-b74c-c6d8b76c5deb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.610421191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f0c43a5-eb3c-4c42-8422-83574282b53f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.610506296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f0c43a5-eb3c-4c42-8422-83574282b53f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.610751515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f0c43a5-eb3c-4c42-8422-83574282b53f name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.646763893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0224a986-5912-40da-8d91-206e2f7bdee2 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.646850738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0224a986-5912-40da-8d91-206e2f7bdee2 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.647864770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57e508dd-6b60-4cfd-aedb-b63d1f3f606b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.648643396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749280648616432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57e508dd-6b60-4cfd-aedb-b63d1f3f606b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.649235122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06e4fe9e-d617-4400-993b-2767cb91db57 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.649285824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06e4fe9e-d617-4400-993b-2767cb91db57 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:20 embed-certs-486436 crio[727]: time="2024-07-23 15:41:20.649514592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895,PodSandboxId:870b02d3c5612615453d97ead73ff7010a6bc2655d0184958ebe5c80e71b6e7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748338498427382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7dedd-e070-447a-b57a-9f19d00fb80b,},Annotations:map[string]string{io.kubernetes.container.hash: edcf8efa,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e,PodSandboxId:64a2208f90f3e02897873635adc8172e36f8ac304782531ad0cf545a2846cfab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337890207347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hnlc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da0e07-9db4-423d-b833-ee598822f88f,},Annotations:map[string]string{io.kubernetes.container.hash: 3cb2aae4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa,PodSandboxId:0ac3f1de36b4656efeeb0fa99560d5439875df21760d22cdf4a9f306067b701d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748337822875290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj5xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
ca106cd-e6ab-4dc7-a602-3b304401d255,},Annotations:map[string]string{io.kubernetes.container.hash: ac977a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd,PodSandboxId:fdd86191b356ff9e40478d12ffe8531d5b8dfb497f82e9ea3672350887657705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1721748337161796096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzh4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838e5bd5-75c9-4dcd-a49b-cd09b0bad7af,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3f38df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d,PodSandboxId:27204d27f928e9087e14a7022b304ab187b9ef4f668499e243cf62b4b87bbae8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748317693838268,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e341019cf5cf6f784054989fb0e0be,},Annotations:map[string]string{io.kubernetes.container.hash: bd53b1ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18,PodSandboxId:9a913748a4b9f027c36fffb05930815ee16e5630516354ca3c4343339e739a0c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748317679033632,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b38ed67e1f46d67d617ae7532e80d7,},Annotations:map[string]string{io.kubernetes.container.hash: 288df32b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047,PodSandboxId:9bfe35d814868c99fc327993599ecce68edf263a12c913d0f8a22822474c522f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748317688775919,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 461a1b0ee88cf7ed96e731c39e5ecc99,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa,PodSandboxId:c8cf85132d12fea4cedbff80fab188aba474fa5934faf254d483e87b66cc612e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748317594609083,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-486436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74c9273d459fb9a6ab370c223b5c34a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06e4fe9e-d617-4400-993b-2767cb91db57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2df1371fcdf71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   870b02d3c5612       storage-provisioner
	6875dba4151da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   64a2208f90f3e       coredns-7db6d8ff4d-hnlc7
	58c6510eb089a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   0ac3f1de36b46       coredns-7db6d8ff4d-lj5xg
	f1f141eaef2ba       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   fdd86191b356f       kube-proxy-wzh4d
	ff5f662fc4f45       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   27204d27f928e       etcd-embed-certs-486436
	d96dc2ceb2625       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   9bfe35d814868       kube-scheduler-embed-certs-486436
	57cec121ce037       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   9a913748a4b9f       kube-apiserver-embed-certs-486436
	b7c481c754ef1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   c8cf85132d12f       kube-controller-manager-embed-certs-486436
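	The table above is the CRI-level view of the node. As a minimal sketch (assuming the embed-certs-486436 profile and the same minikube binary used elsewhere in this report), roughly the same listing could be regenerated with:
	
	    out/minikube-linux-amd64 -p embed-certs-486436 ssh "sudo crictl ps -a"
	
	crictl ps -a lists all containers known to CRI-O, including exited ones; everything captured above is in the Running state.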
	
	
	==> coredns [58c6510eb089aa59abfed28b83ea21d376c7db62d8605ac77f7d545080607aaa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6875dba4151da5a16d271dc3f024e19951dfae1a6b90617c8dc018a72ad0ac7e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-486436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-486436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=embed-certs-486436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-486436
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:41:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:41:01 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:41:01 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:41:01 +0000   Tue, 23 Jul 2024 15:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:41:01 +0000   Tue, 23 Jul 2024 15:25:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    embed-certs-486436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14762c4ab825492d956123b475a79cfa
	  System UUID:                14762c4a-b825-492d-9561-23b475a79cfa
	  Boot ID:                    670dbae9-a5f4-4314-956d-1b105e1f2510
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hnlc7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-lj5xg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-486436                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-486436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-486436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-wzh4d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-486436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-7l2jw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-486436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-486436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-486436 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-486436 event: Registered Node embed-certs-486436 in Controller
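	As a quick sanity check on the Allocated resources figures above, using only numbers from this node description: CPU requests of 950m against 2000m allocatable give 950/2000 ≈ 47%, memory requests of 440Mi against 2164184Ki (≈ 2113Mi) allocatable give 440/2113 ≈ 20%, and memory limits of 340Mi give 340/2113 ≈ 16%, which matches the reported percentages.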
	
	
	==> dmesg <==
	[  +0.050392] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036089] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.692943] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.901446] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.511879] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.838121] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055462] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064265] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.169480] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.146275] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.297435] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.147752] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +1.899642] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.060656] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.538626] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.584489] kauditd_printk_skb: 79 callbacks suppressed
	[Jul23 15:25] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.733194] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +4.438462] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.635351] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[ +13.890636] systemd-fstab-generator[4098]: Ignoring "noauto" option for root device
	[  +0.099490] kauditd_printk_skb: 14 callbacks suppressed
	[Jul23 15:26] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ff5f662fc4f451ff0c25853f179f6ea6240823d1eb100f260ca5f4cb126ae55d] <==
	{"level":"info","ts":"2024-07-23T15:25:18.658202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-07-23T15:25:18.663369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:25:18.66363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:25:18.66366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:25:18.665127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:25:18.692215Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:25:18.694792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:25:18.712384Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	2024/07/23 15:25:22 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-23T15:35:18.707401Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-07-23T15:35:18.716261Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"8.387759ms","hash":2999925534,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2236416,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-23T15:35:18.716424Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2999925534,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-07-23T15:40:18.714196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-23T15:40:18.719314Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"4.245142ms","hash":2882185887,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1572864,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-23T15:40:18.719487Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2882185887,"revision":927,"compact-revision":684}
	{"level":"info","ts":"2024-07-23T15:41:01.553327Z","caller":"traceutil/trace.go:171","msg":"trace[850107827] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"145.040158ms","start":"2024-07-23T15:41:01.40824Z","end":"2024-07-23T15:41:01.55328Z","steps":["trace[850107827] 'process raft request'  (duration: 144.954071ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:41:02.04118Z","caller":"traceutil/trace.go:171","msg":"trace[160851825] linearizableReadLoop","detail":"{readStateIndex:1409; appliedIndex:1408; }","duration":"132.583722ms","start":"2024-07-23T15:41:01.908582Z","end":"2024-07-23T15:41:02.041166Z","steps":["trace[160851825] 'read index received'  (duration: 132.459551ms)","trace[160851825] 'applied index is now lower than readState.Index'  (duration: 123.493µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T15:41:02.04128Z","caller":"traceutil/trace.go:171","msg":"trace[907301838] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"242.057589ms","start":"2024-07-23T15:41:01.799214Z","end":"2024-07-23T15:41:02.041271Z","steps":["trace[907301838] 'process raft request'  (duration: 241.833554ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:41:02.041704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.053273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:41:02.041801Z","caller":"traceutil/trace.go:171","msg":"trace[566923483] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1207; }","duration":"133.233418ms","start":"2024-07-23T15:41:01.908557Z","end":"2024-07-23T15:41:02.041791Z","steps":["trace[566923483] 'agreement among raft nodes before linearized reading'  (duration: 133.059524ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:41:02.356856Z","caller":"traceutil/trace.go:171","msg":"trace[145792176] linearizableReadLoop","detail":"{readStateIndex:1410; appliedIndex:1409; }","duration":"312.198811ms","start":"2024-07-23T15:41:02.044642Z","end":"2024-07-23T15:41:02.356841Z","steps":["trace[145792176] 'read index received'  (duration: 270.02253ms)","trace[145792176] 'applied index is now lower than readState.Index'  (duration: 42.174259ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:41:02.35694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.283468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:41:02.356961Z","caller":"traceutil/trace.go:171","msg":"trace[1061243714] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1207; }","duration":"312.336055ms","start":"2024-07-23T15:41:02.04462Z","end":"2024-07-23T15:41:02.356956Z","steps":["trace[1061243714] 'agreement among raft nodes before linearized reading'  (duration: 312.284027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:41:02.356988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:41:02.044611Z","time spent":"312.364094ms","remote":"127.0.0.1:35176","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-23T15:41:02.357159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:41:02.044413Z","time spent":"312.743321ms","remote":"127.0.0.1:35210","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
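	The warn entries above are etcd's slow-request tracing: any apply or read that exceeds the 100ms expected-duration is logged, and the roughly 130-312ms ranges seen here on a 2-vCPU VM point to transient I/O or CPU contention rather than data loss. A hedged way to count how often this happened over the node's lifetime (assuming kubectl access to this cluster and the etcd pod name shown in the container status section):
	
	    kubectl -n kube-system logs etcd-embed-certs-486436 | grep -c "apply request took too long"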
	
	
	==> kernel <==
	 15:41:21 up 21 min,  0 users,  load average: 0.08, 0.11, 0.09
	Linux embed-certs-486436 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57cec121ce037d5f4a48684c699b12070e255c11e5b120b8e5b74b8975f59a18] <==
	I0723 15:35:21.323651       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:36:21.323581       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:36:21.323674       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:36:21.323688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:36:21.323786       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:36:21.323855       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:36:21.325052       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:38:21.323855       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:38:21.324220       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:38:21.324271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:38:21.326151       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:38:21.326250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:38:21.326260       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:40:20.326017       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:40:20.326189       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0723 15:40:21.326491       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:40:21.326617       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:40:21.326656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:40:21.326536       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:40:21.326773       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:40:21.327720       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
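	The repeated 503s above come from the API aggregation layer: the apiserver cannot reach the backend behind the v1beta1.metrics.k8s.io APIService, which lines up with the kubelet section below showing ImagePullBackOff for the metrics-server image. A hedged way to confirm the APIService's availability, assuming kubectl access to this cluster:
	
	    kubectl get apiservice v1beta1.metrics.k8s.io
	
	While metrics-server is not running, the AVAILABLE column would likely read False with a reason such as MissingEndpoints or FailedDiscoveryCheck.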
	
	
	==> kube-controller-manager [b7c481c754ef10484ceea394176f362eb551759610024b318ac4be17703005fa] <==
	I0723 15:35:36.379436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:36:05.902002       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:36:06.386407       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:36:35.910502       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:36:36.394493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:36:36.984832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="201.073µs"
	I0723 15:36:51.975913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="136.048µs"
	E0723 15:37:05.915011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:37:06.402716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:37:35.921386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:37:36.410200       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:38:05.926697       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:38:06.418556       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:38:35.932540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:38:36.426899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:05.937874       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:39:06.433774       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:35.946433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:39:36.441219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:05.951030       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:40:06.450773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:35.956801       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:40:36.458108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:41:05.962928       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:41:06.467081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1f141eaef2ba2027906be08ccd4beffd400c1ae2278b91b1c3a8890bbcec5dd] <==
	I0723 15:25:37.498824       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:25:37.516043       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0723 15:25:37.620717       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 15:25:37.620752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:25:37.620768       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:25:37.638743       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:25:37.638965       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:25:37.638985       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:25:37.650101       1 config.go:192] "Starting service config controller"
	I0723 15:25:37.650131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:25:37.650159       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:25:37.650162       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:25:37.650179       1 config.go:319] "Starting node config controller"
	I0723 15:25:37.650182       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:25:37.750522       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:25:37.750651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:25:37.750450       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d96dc2ceb2625b36d4e6a9e517db3dbf3d5c49f9114f64ef41d677e619e1f047] <==
	W0723 15:25:20.368870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 15:25:20.370412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 15:25:20.372434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:20.372524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.174323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:25:21.174391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:25:21.177404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0723 15:25:21.177429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0723 15:25:21.203853       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 15:25:21.203970       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:25:21.241613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 15:25:21.242017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 15:25:21.316484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0723 15:25:21.316537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0723 15:25:21.390277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:21.390383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.396321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 15:25:21.396423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 15:25:21.476967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:25:21.477065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:25:21.526210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 15:25:21.526266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 15:25:21.567323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 15:25:21.567395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0723 15:25:24.049595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:38:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:38:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:38:33 embed-certs-486436 kubelet[3910]: E0723 15:38:33.960070    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:38:44 embed-certs-486436 kubelet[3910]: E0723 15:38:44.960052    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:38:55 embed-certs-486436 kubelet[3910]: E0723 15:38:55.959305    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:39:10 embed-certs-486436 kubelet[3910]: E0723 15:39:10.960107    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:39:22 embed-certs-486436 kubelet[3910]: E0723 15:39:22.973557    3910 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:39:22 embed-certs-486436 kubelet[3910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:39:22 embed-certs-486436 kubelet[3910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:39:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:39:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:39:25 embed-certs-486436 kubelet[3910]: E0723 15:39:25.959413    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:39:36 embed-certs-486436 kubelet[3910]: E0723 15:39:36.960828    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:39:50 embed-certs-486436 kubelet[3910]: E0723 15:39:50.961907    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:40:01 embed-certs-486436 kubelet[3910]: E0723 15:40:01.960046    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:40:15 embed-certs-486436 kubelet[3910]: E0723 15:40:15.960517    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:40:22 embed-certs-486436 kubelet[3910]: E0723 15:40:22.974880    3910 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:40:22 embed-certs-486436 kubelet[3910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:40:22 embed-certs-486436 kubelet[3910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:40:22 embed-certs-486436 kubelet[3910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:40:22 embed-certs-486436 kubelet[3910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:40:30 embed-certs-486436 kubelet[3910]: E0723 15:40:30.959974    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:40:44 embed-certs-486436 kubelet[3910]: E0723 15:40:44.960018    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:40:59 embed-certs-486436 kubelet[3910]: E0723 15:40:59.961205    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	Jul 23 15:41:14 embed-certs-486436 kubelet[3910]: E0723 15:41:14.959923    3910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7l2jw" podUID="d7796159-5366-4909-b019-84a0f104667f"
	
	
	==> storage-provisioner [2df1371fcdf7160c5e33ca044855b02ad4e8a0573f30518d25c6b0e16b5ee895] <==
	I0723 15:25:38.599322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:25:38.615738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:25:38.615848       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:25:38.625001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:25:38.627959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921!
	I0723 15:25:38.630593       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51c2b8cb-8e74-45ca-81fa-08ae25bfe6af", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921 became leader
	I0723 15:25:38.728986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-486436_c052ae9f-7be6-4d77-b6ec-28b68b200921!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-486436 -n embed-certs-486436
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-486436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-7l2jw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw: exit status 1 (57.829215ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-7l2jw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-486436 describe pod metrics-server-569cc877fc-7l2jw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (395.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:43:47.239661139 +0000 UTC m=+6436.445405866
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911217 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (70.074733ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-911217 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-911217 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-911217 logs -n 25: (1.306449244s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo docker                        | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo cat                           | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo                               | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo find                          | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-562147 sudo crio                          | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-562147                                    | kindnet-562147            | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	| start   | -p enable-default-cni-562147                         | enable-default-cni-562147 | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-562147 pgrep -a                            | calico-562147             | jenkins | v1.33.1 | 23 Jul 24 15:43 UTC | 23 Jul 24 15:43 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:43:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:43:20.993097   77957 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:43:20.993387   77957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:43:20.993398   77957 out.go:304] Setting ErrFile to fd 2...
	I0723 15:43:20.993403   77957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:43:20.993627   77957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:43:20.994227   77957 out.go:298] Setting JSON to false
	I0723 15:43:20.995381   77957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8747,"bootTime":1721740654,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:43:20.995443   77957 start.go:139] virtualization: kvm guest
	I0723 15:43:20.997902   77957 out.go:177] * [enable-default-cni-562147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:43:20.999440   77957 notify.go:220] Checking for updates...
	I0723 15:43:20.999458   77957 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:43:21.000843   77957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:43:21.002397   77957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:43:21.003908   77957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:43:21.005241   77957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:43:21.006555   77957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:43:21.008276   77957 config.go:182] Loaded profile config "calico-562147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:43:21.008396   77957 config.go:182] Loaded profile config "custom-flannel-562147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:43:21.008486   77957 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:43:21.008614   77957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:43:21.049570   77957 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 15:43:21.050927   77957 start.go:297] selected driver: kvm2
	I0723 15:43:21.050943   77957 start.go:901] validating driver "kvm2" against <nil>
	I0723 15:43:21.050956   77957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:43:21.051719   77957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:43:21.051803   77957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:43:21.069201   77957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:43:21.069270   77957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0723 15:43:21.069500   77957 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0723 15:43:21.069528   77957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:43:21.069601   77957 cni.go:84] Creating CNI manager for "bridge"
	I0723 15:43:21.069620   77957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 15:43:21.069682   77957 start.go:340] cluster config:
	{Name:enable-default-cni-562147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:43:21.069803   77957 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:43:21.071422   77957 out.go:177] * Starting "enable-default-cni-562147" primary control-plane node in "enable-default-cni-562147" cluster
	I0723 15:43:19.621303   74197 pod_ready.go:102] pod "calico-kube-controllers-564985c589-kdchk" in "kube-system" namespace has status "Ready":"False"
	I0723 15:43:22.118524   74197 pod_ready.go:102] pod "calico-kube-controllers-564985c589-kdchk" in "kube-system" namespace has status "Ready":"False"
	I0723 15:43:20.748658   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:20.749068   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find current IP address of domain custom-flannel-562147 in network mk-custom-flannel-562147
	I0723 15:43:20.749097   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | I0723 15:43:20.749041   76578 retry.go:31] will retry after 1.904110874s: waiting for machine to come up
	I0723 15:43:22.656361   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:22.656869   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find current IP address of domain custom-flannel-562147 in network mk-custom-flannel-562147
	I0723 15:43:22.656896   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | I0723 15:43:22.656827   76578 retry.go:31] will retry after 3.424023066s: waiting for machine to come up
	I0723 15:43:21.072538   77957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:43:21.072567   77957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:43:21.072576   77957 cache.go:56] Caching tarball of preloaded images
	I0723 15:43:21.072659   77957 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:43:21.072675   77957 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:43:21.072764   77957 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/enable-default-cni-562147/config.json ...
	I0723 15:43:21.072787   77957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/enable-default-cni-562147/config.json: {Name:mkb5a4df9e027c5154b4a02cfb2683c84b3bdd0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:21.072931   77957 start.go:360] acquireMachinesLock for enable-default-cni-562147: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:43:23.117382   74197 pod_ready.go:92] pod "calico-kube-controllers-564985c589-kdchk" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:23.117404   74197 pod_ready.go:81] duration metric: took 20.505311776s for pod "calico-kube-controllers-564985c589-kdchk" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:23.117412   74197 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-nmd7m" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.623857   74197 pod_ready.go:92] pod "calico-node-nmd7m" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:24.623885   74197 pod_ready.go:81] duration metric: took 1.506464945s for pod "calico-node-nmd7m" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.623897   74197 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-f8hzd" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.628727   74197 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8hzd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:24.628746   74197 pod_ready.go:81] duration metric: took 4.841994ms for pod "coredns-7db6d8ff4d-f8hzd" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.628754   74197 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-mbctb" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.633092   74197 pod_ready.go:92] pod "coredns-7db6d8ff4d-mbctb" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:24.633111   74197 pod_ready.go:81] duration metric: took 4.350239ms for pod "coredns-7db6d8ff4d-mbctb" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.633122   74197 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.637043   74197 pod_ready.go:92] pod "etcd-calico-562147" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:24.637073   74197 pod_ready.go:81] duration metric: took 3.932781ms for pod "etcd-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.637093   74197 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.716327   74197 pod_ready.go:92] pod "kube-apiserver-calico-562147" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:24.716348   74197 pod_ready.go:81] duration metric: took 79.248059ms for pod "kube-apiserver-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:24.716356   74197 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.116576   74197 pod_ready.go:92] pod "kube-controller-manager-calico-562147" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:25.116604   74197 pod_ready.go:81] duration metric: took 400.24008ms for pod "kube-controller-manager-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.116618   74197 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-z9vv9" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.515410   74197 pod_ready.go:92] pod "kube-proxy-z9vv9" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:25.515433   74197 pod_ready.go:81] duration metric: took 398.808551ms for pod "kube-proxy-z9vv9" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.515464   74197 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.917163   74197 pod_ready.go:92] pod "kube-scheduler-calico-562147" in "kube-system" namespace has status "Ready":"True"
	I0723 15:43:25.917185   74197 pod_ready.go:81] duration metric: took 401.714126ms for pod "kube-scheduler-calico-562147" in "kube-system" namespace to be "Ready" ...
	I0723 15:43:25.917195   74197 pod_ready.go:38] duration metric: took 23.317513377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:43:25.917209   74197 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:43:25.917255   74197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:43:25.933473   74197 api_server.go:72] duration metric: took 33.026678047s to wait for apiserver process to appear ...
	I0723 15:43:25.933500   74197 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:43:25.933522   74197 api_server.go:253] Checking apiserver healthz at https://192.168.50.180:8443/healthz ...
	I0723 15:43:25.937851   74197 api_server.go:279] https://192.168.50.180:8443/healthz returned 200:
	ok
	I0723 15:43:25.938820   74197 api_server.go:141] control plane version: v1.30.3
	I0723 15:43:25.938841   74197 api_server.go:131] duration metric: took 5.333951ms to wait for apiserver health ...
	I0723 15:43:25.938848   74197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:43:26.121483   74197 system_pods.go:59] 10 kube-system pods found
	I0723 15:43:26.121519   74197 system_pods.go:61] "calico-kube-controllers-564985c589-kdchk" [55b38be0-d531-4575-b08f-46e409195ab4] Running
	I0723 15:43:26.121526   74197 system_pods.go:61] "calico-node-nmd7m" [18857bbd-5347-4390-aa14-ddfbba1cd2a8] Running
	I0723 15:43:26.121531   74197 system_pods.go:61] "coredns-7db6d8ff4d-f8hzd" [3d3a870f-1b11-408a-8fa1-4c3d03bc8471] Running
	I0723 15:43:26.121535   74197 system_pods.go:61] "coredns-7db6d8ff4d-mbctb" [aa4296f9-01c9-49a2-addf-6bad06f60b74] Running
	I0723 15:43:26.121539   74197 system_pods.go:61] "etcd-calico-562147" [67f827b9-dd43-481e-9565-7efe5a817534] Running
	I0723 15:43:26.121546   74197 system_pods.go:61] "kube-apiserver-calico-562147" [da767fa3-073d-4c0f-914f-635dd2fb7e14] Running
	I0723 15:43:26.121550   74197 system_pods.go:61] "kube-controller-manager-calico-562147" [e768a783-15a5-4fb6-b2ff-552652daa3a3] Running
	I0723 15:43:26.121555   74197 system_pods.go:61] "kube-proxy-z9vv9" [b4a97a5f-b23d-4b7c-ba5f-5e754722b79a] Running
	I0723 15:43:26.121559   74197 system_pods.go:61] "kube-scheduler-calico-562147" [bd801592-c91d-4102-8b95-8dd9676d5f1b] Running
	I0723 15:43:26.121563   74197 system_pods.go:61] "storage-provisioner" [9ba208c4-115f-420d-918c-3e8bf83b4737] Running
	I0723 15:43:26.121571   74197 system_pods.go:74] duration metric: took 182.716991ms to wait for pod list to return data ...
	I0723 15:43:26.121584   74197 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:43:26.315790   74197 default_sa.go:45] found service account: "default"
	I0723 15:43:26.315813   74197 default_sa.go:55] duration metric: took 194.222683ms for default service account to be created ...
	I0723 15:43:26.315822   74197 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:43:26.521314   74197 system_pods.go:86] 10 kube-system pods found
	I0723 15:43:26.521344   74197 system_pods.go:89] "calico-kube-controllers-564985c589-kdchk" [55b38be0-d531-4575-b08f-46e409195ab4] Running
	I0723 15:43:26.521350   74197 system_pods.go:89] "calico-node-nmd7m" [18857bbd-5347-4390-aa14-ddfbba1cd2a8] Running
	I0723 15:43:26.521354   74197 system_pods.go:89] "coredns-7db6d8ff4d-f8hzd" [3d3a870f-1b11-408a-8fa1-4c3d03bc8471] Running
	I0723 15:43:26.521358   74197 system_pods.go:89] "coredns-7db6d8ff4d-mbctb" [aa4296f9-01c9-49a2-addf-6bad06f60b74] Running
	I0723 15:43:26.521362   74197 system_pods.go:89] "etcd-calico-562147" [67f827b9-dd43-481e-9565-7efe5a817534] Running
	I0723 15:43:26.521365   74197 system_pods.go:89] "kube-apiserver-calico-562147" [da767fa3-073d-4c0f-914f-635dd2fb7e14] Running
	I0723 15:43:26.521379   74197 system_pods.go:89] "kube-controller-manager-calico-562147" [e768a783-15a5-4fb6-b2ff-552652daa3a3] Running
	I0723 15:43:26.521386   74197 system_pods.go:89] "kube-proxy-z9vv9" [b4a97a5f-b23d-4b7c-ba5f-5e754722b79a] Running
	I0723 15:43:26.521389   74197 system_pods.go:89] "kube-scheduler-calico-562147" [bd801592-c91d-4102-8b95-8dd9676d5f1b] Running
	I0723 15:43:26.521393   74197 system_pods.go:89] "storage-provisioner" [9ba208c4-115f-420d-918c-3e8bf83b4737] Running
	I0723 15:43:26.521400   74197 system_pods.go:126] duration metric: took 205.572769ms to wait for k8s-apps to be running ...
	I0723 15:43:26.521410   74197 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:43:26.521448   74197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:43:26.538705   74197 system_svc.go:56] duration metric: took 17.28561ms WaitForService to wait for kubelet
	I0723 15:43:26.538731   74197 kubeadm.go:582] duration metric: took 33.631942996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:43:26.538748   74197 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:43:26.716163   74197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:43:26.716190   74197 node_conditions.go:123] node cpu capacity is 2
	I0723 15:43:26.716201   74197 node_conditions.go:105] duration metric: took 177.44943ms to run NodePressure ...
	I0723 15:43:26.716232   74197 start.go:241] waiting for startup goroutines ...
	I0723 15:43:26.716240   74197 start.go:246] waiting for cluster config update ...
	I0723 15:43:26.716249   74197 start.go:255] writing updated cluster config ...
	I0723 15:43:26.716478   74197 ssh_runner.go:195] Run: rm -f paused
	I0723 15:43:26.762736   74197 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:43:26.764843   74197 out.go:177] * Done! kubectl is now configured to use "calico-562147" cluster and "default" namespace by default
	I0723 15:43:26.082674   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:26.083208   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find current IP address of domain custom-flannel-562147 in network mk-custom-flannel-562147
	I0723 15:43:26.083231   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | I0723 15:43:26.083169   76578 retry.go:31] will retry after 3.84003413s: waiting for machine to come up
	I0723 15:43:29.924818   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:29.925297   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find current IP address of domain custom-flannel-562147 in network mk-custom-flannel-562147
	I0723 15:43:29.925320   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | I0723 15:43:29.925242   76578 retry.go:31] will retry after 3.667797739s: waiting for machine to come up
	I0723 15:43:33.595440   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:33.596063   76555 main.go:141] libmachine: (custom-flannel-562147) Found IP for machine: 192.168.72.32
	I0723 15:43:33.596080   76555 main.go:141] libmachine: (custom-flannel-562147) Reserving static IP address...
	I0723 15:43:33.596095   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has current primary IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:33.596453   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find host DHCP lease matching {name: "custom-flannel-562147", mac: "52:54:00:d9:a3:ba", ip: "192.168.72.32"} in network mk-custom-flannel-562147
	I0723 15:43:33.671065   76555 main.go:141] libmachine: (custom-flannel-562147) Reserved static IP address: 192.168.72.32
	I0723 15:43:33.671091   76555 main.go:141] libmachine: (custom-flannel-562147) Waiting for SSH to be available...
	I0723 15:43:33.671101   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Getting to WaitForSSH function...
	I0723 15:43:33.674421   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:33.674858   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147
	I0723 15:43:33.674886   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | unable to find defined IP address of network mk-custom-flannel-562147 interface with MAC address 52:54:00:d9:a3:ba
	I0723 15:43:33.675039   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Using SSH client type: external
	I0723 15:43:33.675059   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa (-rw-------)
	I0723 15:43:33.675086   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:43:33.675095   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | About to run SSH command:
	I0723 15:43:33.675107   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | exit 0
	I0723 15:43:33.678807   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | SSH cmd err, output: exit status 255: 
	I0723 15:43:33.678835   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0723 15:43:33.678848   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | command : exit 0
	I0723 15:43:33.678860   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | err     : exit status 255
	I0723 15:43:33.678888   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | output  : 
	I0723 15:43:38.159129   77957 start.go:364] duration metric: took 17.086170946s to acquireMachinesLock for "enable-default-cni-562147"
	I0723 15:43:38.159235   77957 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-562147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:43:38.159395   77957 start.go:125] createHost starting for "" (driver="kvm2")
	I0723 15:43:36.679410   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Getting to WaitForSSH function...
	I0723 15:43:36.682064   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:36.682586   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:36.682635   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:36.682827   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Using SSH client type: external
	I0723 15:43:36.682855   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa (-rw-------)
	I0723 15:43:36.682895   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:43:36.682915   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | About to run SSH command:
	I0723 15:43:36.682927   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | exit 0
	I0723 15:43:36.810902   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | SSH cmd err, output: <nil>: 
	I0723 15:43:36.811202   76555 main.go:141] libmachine: (custom-flannel-562147) KVM machine creation complete!
	I0723 15:43:36.811499   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetConfigRaw
	I0723 15:43:36.894825   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:36.895197   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:36.895376   76555 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0723 15:43:36.895397   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetState
	I0723 15:43:36.897085   76555 main.go:141] libmachine: Detecting operating system of created instance...
	I0723 15:43:36.897102   76555 main.go:141] libmachine: Waiting for SSH to be available...
	I0723 15:43:36.897114   76555 main.go:141] libmachine: Getting to WaitForSSH function...
	I0723 15:43:36.897124   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:36.899746   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:36.900143   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:36.900183   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:36.900282   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:36.900497   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:36.900678   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:36.900800   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:36.900968   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:36.901211   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:36.901226   76555 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0723 15:43:37.009634   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:43:37.009655   76555 main.go:141] libmachine: Detecting the provisioner...
	I0723 15:43:37.009662   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.012300   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.012771   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.012800   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.012992   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.013235   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.013444   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.013614   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.013779   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:37.014016   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:37.014040   76555 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0723 15:43:37.123500   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0723 15:43:37.123576   76555 main.go:141] libmachine: found compatible host: buildroot
	I0723 15:43:37.123590   76555 main.go:141] libmachine: Provisioning with buildroot...
	I0723 15:43:37.123602   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetMachineName
	I0723 15:43:37.123867   76555 buildroot.go:166] provisioning hostname "custom-flannel-562147"
	I0723 15:43:37.123897   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetMachineName
	I0723 15:43:37.124064   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.126649   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.127455   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.127479   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.127732   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.127915   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.128072   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.128206   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.128397   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:37.128556   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:37.128568   76555 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-562147 && echo "custom-flannel-562147" | sudo tee /etc/hostname
	I0723 15:43:37.254174   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-562147
	
	I0723 15:43:37.254202   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.256956   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.257369   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.257396   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.257592   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.257768   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.257901   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.258021   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.258252   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:37.258469   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:37.258496   76555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-562147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-562147/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-562147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:43:37.383010   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:43:37.383041   76555 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:43:37.383084   76555 buildroot.go:174] setting up certificates
	I0723 15:43:37.383095   76555 provision.go:84] configureAuth start
	I0723 15:43:37.383110   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetMachineName
	I0723 15:43:37.383396   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetIP
	I0723 15:43:37.386056   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.386432   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.386462   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.386623   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.388913   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.389324   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.389345   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.389575   76555 provision.go:143] copyHostCerts
	I0723 15:43:37.389625   76555 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:43:37.389634   76555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:43:37.389711   76555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:43:37.389818   76555 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:43:37.389830   76555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:43:37.389874   76555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:43:37.389942   76555 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:43:37.389952   76555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:43:37.389983   76555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:43:37.390049   76555 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-562147 san=[127.0.0.1 192.168.72.32 custom-flannel-562147 localhost minikube]
	I0723 15:43:37.479829   76555 provision.go:177] copyRemoteCerts
	I0723 15:43:37.479881   76555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:43:37.479901   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.482864   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.483238   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.483274   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.483490   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.483690   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.483861   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.484001   76555 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa Username:docker}
	I0723 15:43:37.569865   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:43:37.595868   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0723 15:43:37.618841   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:43:37.643544   76555 provision.go:87] duration metric: took 260.430554ms to configureAuth
	I0723 15:43:37.643574   76555 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:43:37.643718   76555 config.go:182] Loaded profile config "custom-flannel-562147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:43:37.643791   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.646662   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.647023   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.647046   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.647320   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.647525   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.647726   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.647904   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.648088   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:37.648321   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:37.648358   76555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:43:37.911876   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:43:37.911908   76555 main.go:141] libmachine: Checking connection to Docker...
	I0723 15:43:37.911916   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetURL
	I0723 15:43:37.913131   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | Using libvirt version 6000000
	I0723 15:43:37.916428   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.916820   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.916879   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.917058   76555 main.go:141] libmachine: Docker is up and running!
	I0723 15:43:37.917071   76555 main.go:141] libmachine: Reticulating splines...
	I0723 15:43:37.917078   76555 client.go:171] duration metric: took 28.10911304s to LocalClient.Create
	I0723 15:43:37.917100   76555 start.go:167] duration metric: took 28.109178048s to libmachine.API.Create "custom-flannel-562147"
	I0723 15:43:37.917111   76555 start.go:293] postStartSetup for "custom-flannel-562147" (driver="kvm2")
	I0723 15:43:37.917120   76555 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:43:37.917135   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:37.917420   76555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:43:37.917447   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:37.919548   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.919868   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:37.919886   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:37.920072   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:37.920305   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:37.920481   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:37.920661   76555 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa Username:docker}
	I0723 15:43:38.005506   76555 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:43:38.009556   76555 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:43:38.009580   76555 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:43:38.009653   76555 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:43:38.009747   76555 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:43:38.009859   76555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:43:38.019637   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:43:38.042184   76555 start.go:296] duration metric: took 125.060427ms for postStartSetup
	I0723 15:43:38.042245   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetConfigRaw
	I0723 15:43:38.042841   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetIP
	I0723 15:43:38.045380   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.045718   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:38.045747   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.045966   76555 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/config.json ...
	I0723 15:43:38.046203   76555 start.go:128] duration metric: took 28.260182222s to createHost
	I0723 15:43:38.046230   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:38.048636   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.049113   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:38.049163   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.049287   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:38.049514   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:38.049675   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:38.049819   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:38.049954   76555 main.go:141] libmachine: Using SSH client type: native
	I0723 15:43:38.050101   76555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0723 15:43:38.050110   76555 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:43:38.158932   76555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721749418.136111512
	
	I0723 15:43:38.158965   76555 fix.go:216] guest clock: 1721749418.136111512
	I0723 15:43:38.158977   76555 fix.go:229] Guest: 2024-07-23 15:43:38.136111512 +0000 UTC Remote: 2024-07-23 15:43:38.046216683 +0000 UTC m=+28.385479400 (delta=89.894829ms)
	I0723 15:43:38.159003   76555 fix.go:200] guest clock delta is within tolerance: 89.894829ms
	I0723 15:43:38.159010   76555 start.go:83] releasing machines lock for "custom-flannel-562147", held for 28.373081173s
	I0723 15:43:38.159042   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:38.159332   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetIP
	I0723 15:43:38.162610   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.163004   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:38.163055   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.163132   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:38.163604   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:38.163821   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .DriverName
	I0723 15:43:38.163909   76555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:43:38.163956   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:38.164015   76555 ssh_runner.go:195] Run: cat /version.json
	I0723 15:43:38.164038   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHHostname
	I0723 15:43:38.167017   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.167249   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.167504   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:38.167555   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.167642   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:38.167839   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:38.167857   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:38.167893   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:38.168150   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHPort
	I0723 15:43:38.168174   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:38.168326   76555 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa Username:docker}
	I0723 15:43:38.168349   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHKeyPath
	I0723 15:43:38.168498   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetSSHUsername
	I0723 15:43:38.168672   76555 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/custom-flannel-562147/id_rsa Username:docker}
	I0723 15:43:38.255865   76555 ssh_runner.go:195] Run: systemctl --version
	I0723 15:43:38.286067   76555 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:43:38.447569   76555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:43:38.453395   76555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:43:38.453465   76555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:43:38.472170   76555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:43:38.472197   76555 start.go:495] detecting cgroup driver to use...
	I0723 15:43:38.472268   76555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:43:38.488941   76555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:43:38.502733   76555 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:43:38.502798   76555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:43:38.516950   76555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:43:38.531490   76555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:43:38.649660   76555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:43:38.810598   76555 docker.go:233] disabling docker service ...
	I0723 15:43:38.810651   76555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:43:38.832074   76555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:43:38.844728   76555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:43:38.983590   76555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:43:39.108910   76555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:43:39.123093   76555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:43:39.140708   76555 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:43:39.140768   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.151220   76555 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:43:39.151283   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.162339   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.173109   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.183564   76555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:43:39.193714   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.203346   76555 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.220203   76555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:43:39.230929   76555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:43:39.240404   76555 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:43:39.240465   76555 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:43:39.254015   76555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:43:39.263422   76555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:43:39.382347   76555 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:43:39.540314   76555 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:43:39.540402   76555 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:43:39.545175   76555 start.go:563] Will wait 60s for crictl version
	I0723 15:43:39.545229   76555 ssh_runner.go:195] Run: which crictl
	I0723 15:43:39.549057   76555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:43:39.589249   76555 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:43:39.589334   76555 ssh_runner.go:195] Run: crio --version
	I0723 15:43:39.623150   76555 ssh_runner.go:195] Run: crio --version
	I0723 15:43:39.656409   76555 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:43:39.657753   76555 main.go:141] libmachine: (custom-flannel-562147) Calling .GetIP
	I0723 15:43:39.660840   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:39.661265   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a3:ba", ip: ""} in network mk-custom-flannel-562147: {Iface:virbr2 ExpiryTime:2024-07-23 16:43:25 +0000 UTC Type:0 Mac:52:54:00:d9:a3:ba Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:custom-flannel-562147 Clientid:01:52:54:00:d9:a3:ba}
	I0723 15:43:39.661300   76555 main.go:141] libmachine: (custom-flannel-562147) DBG | domain custom-flannel-562147 has defined IP address 192.168.72.32 and MAC address 52:54:00:d9:a3:ba in network mk-custom-flannel-562147
	I0723 15:43:39.661471   76555 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:43:39.666109   76555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:43:39.680951   76555 kubeadm.go:883] updating cluster {Name:custom-flannel-562147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:43:39.681073   76555 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:43:39.681138   76555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:43:38.161401   77957 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0723 15:43:38.161599   77957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:43:38.161641   77957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:43:38.180886   77957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0723 15:43:38.181375   77957 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:43:38.181960   77957 main.go:141] libmachine: Using API Version  1
	I0723 15:43:38.182000   77957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:43:38.182351   77957 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:43:38.182571   77957 main.go:141] libmachine: (enable-default-cni-562147) Calling .GetMachineName
	I0723 15:43:38.182723   77957 main.go:141] libmachine: (enable-default-cni-562147) Calling .DriverName
	I0723 15:43:38.182880   77957 start.go:159] libmachine.API.Create for "enable-default-cni-562147" (driver="kvm2")
	I0723 15:43:38.182900   77957 client.go:168] LocalClient.Create starting
	I0723 15:43:38.182932   77957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem
	I0723 15:43:38.182970   77957 main.go:141] libmachine: Decoding PEM data...
	I0723 15:43:38.182984   77957 main.go:141] libmachine: Parsing certificate...
	I0723 15:43:38.183052   77957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem
	I0723 15:43:38.183067   77957 main.go:141] libmachine: Decoding PEM data...
	I0723 15:43:38.183075   77957 main.go:141] libmachine: Parsing certificate...
	I0723 15:43:38.183090   77957 main.go:141] libmachine: Running pre-create checks...
	I0723 15:43:38.183097   77957 main.go:141] libmachine: (enable-default-cni-562147) Calling .PreCreateCheck
	I0723 15:43:38.183482   77957 main.go:141] libmachine: (enable-default-cni-562147) Calling .GetConfigRaw
	I0723 15:43:38.184081   77957 main.go:141] libmachine: Creating machine...
	I0723 15:43:38.184092   77957 main.go:141] libmachine: (enable-default-cni-562147) Calling .Create
	I0723 15:43:38.184241   77957 main.go:141] libmachine: (enable-default-cni-562147) Creating KVM machine...
	I0723 15:43:38.185575   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | found existing default KVM network
	I0723 15:43:38.187542   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:38.187350   78150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204bf0}
	I0723 15:43:38.187569   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | created network xml: 
	I0723 15:43:38.187584   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | <network>
	I0723 15:43:38.187902   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   <name>mk-enable-default-cni-562147</name>
	I0723 15:43:38.187919   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   <dns enable='no'/>
	I0723 15:43:38.187927   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   
	I0723 15:43:38.187943   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0723 15:43:38.187954   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |     <dhcp>
	I0723 15:43:38.187964   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0723 15:43:38.187979   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |     </dhcp>
	I0723 15:43:38.187988   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   </ip>
	I0723 15:43:38.187994   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG |   
	I0723 15:43:38.188002   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | </network>
	I0723 15:43:38.188008   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | 
	I0723 15:43:38.194021   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | trying to create private KVM network mk-enable-default-cni-562147 192.168.39.0/24...
	I0723 15:43:38.280993   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | private KVM network mk-enable-default-cni-562147 192.168.39.0/24 created
	I0723 15:43:38.281030   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:38.280930   78150 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:43:38.281047   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting up store path in /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147 ...
	I0723 15:43:38.281071   77957 main.go:141] libmachine: (enable-default-cni-562147) Building disk image from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 15:43:38.281088   77957 main.go:141] libmachine: (enable-default-cni-562147) Downloading /home/jenkins/minikube-integration/19319-11303/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0723 15:43:38.531787   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:38.531674   78150 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147/id_rsa...
	I0723 15:43:38.781508   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:38.781367   78150 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147/enable-default-cni-562147.rawdisk...
	I0723 15:43:38.781536   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Writing magic tar header
	I0723 15:43:38.781551   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Writing SSH key tar header
	I0723 15:43:38.781562   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:38.781496   78150 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147 ...
	I0723 15:43:38.781575   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147
	I0723 15:43:38.781668   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube/machines
	I0723 15:43:38.781714   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147 (perms=drwx------)
	I0723 15:43:38.781735   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:43:38.781754   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19319-11303
	I0723 15:43:38.781767   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0723 15:43:38.781782   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home/jenkins
	I0723 15:43:38.781795   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Checking permissions on dir: /home
	I0723 15:43:38.781819   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube/machines (perms=drwxr-xr-x)
	I0723 15:43:38.781841   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303/.minikube (perms=drwxr-xr-x)
	I0723 15:43:38.781853   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | Skipping /home - not owner
	I0723 15:43:38.781872   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins/minikube-integration/19319-11303 (perms=drwxrwxr-x)
	I0723 15:43:38.781889   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0723 15:43:38.781903   77957 main.go:141] libmachine: (enable-default-cni-562147) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0723 15:43:38.781918   77957 main.go:141] libmachine: (enable-default-cni-562147) Creating domain...
	I0723 15:43:38.782991   77957 main.go:141] libmachine: (enable-default-cni-562147) define libvirt domain using xml: 
	I0723 15:43:38.783021   77957 main.go:141] libmachine: (enable-default-cni-562147) <domain type='kvm'>
	I0723 15:43:38.783034   77957 main.go:141] libmachine: (enable-default-cni-562147)   <name>enable-default-cni-562147</name>
	I0723 15:43:38.783047   77957 main.go:141] libmachine: (enable-default-cni-562147)   <memory unit='MiB'>3072</memory>
	I0723 15:43:38.783059   77957 main.go:141] libmachine: (enable-default-cni-562147)   <vcpu>2</vcpu>
	I0723 15:43:38.783067   77957 main.go:141] libmachine: (enable-default-cni-562147)   <features>
	I0723 15:43:38.783098   77957 main.go:141] libmachine: (enable-default-cni-562147)     <acpi/>
	I0723 15:43:38.783117   77957 main.go:141] libmachine: (enable-default-cni-562147)     <apic/>
	I0723 15:43:38.783132   77957 main.go:141] libmachine: (enable-default-cni-562147)     <pae/>
	I0723 15:43:38.783145   77957 main.go:141] libmachine: (enable-default-cni-562147)     
	I0723 15:43:38.783164   77957 main.go:141] libmachine: (enable-default-cni-562147)   </features>
	I0723 15:43:38.783180   77957 main.go:141] libmachine: (enable-default-cni-562147)   <cpu mode='host-passthrough'>
	I0723 15:43:38.783191   77957 main.go:141] libmachine: (enable-default-cni-562147)   
	I0723 15:43:38.783201   77957 main.go:141] libmachine: (enable-default-cni-562147)   </cpu>
	I0723 15:43:38.783213   77957 main.go:141] libmachine: (enable-default-cni-562147)   <os>
	I0723 15:43:38.783221   77957 main.go:141] libmachine: (enable-default-cni-562147)     <type>hvm</type>
	I0723 15:43:38.783233   77957 main.go:141] libmachine: (enable-default-cni-562147)     <boot dev='cdrom'/>
	I0723 15:43:38.783244   77957 main.go:141] libmachine: (enable-default-cni-562147)     <boot dev='hd'/>
	I0723 15:43:38.783253   77957 main.go:141] libmachine: (enable-default-cni-562147)     <bootmenu enable='no'/>
	I0723 15:43:38.783262   77957 main.go:141] libmachine: (enable-default-cni-562147)   </os>
	I0723 15:43:38.783271   77957 main.go:141] libmachine: (enable-default-cni-562147)   <devices>
	I0723 15:43:38.783282   77957 main.go:141] libmachine: (enable-default-cni-562147)     <disk type='file' device='cdrom'>
	I0723 15:43:38.783304   77957 main.go:141] libmachine: (enable-default-cni-562147)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147/boot2docker.iso'/>
	I0723 15:43:38.783318   77957 main.go:141] libmachine: (enable-default-cni-562147)       <target dev='hdc' bus='scsi'/>
	I0723 15:43:38.783333   77957 main.go:141] libmachine: (enable-default-cni-562147)       <readonly/>
	I0723 15:43:38.783344   77957 main.go:141] libmachine: (enable-default-cni-562147)     </disk>
	I0723 15:43:38.783355   77957 main.go:141] libmachine: (enable-default-cni-562147)     <disk type='file' device='disk'>
	I0723 15:43:38.783376   77957 main.go:141] libmachine: (enable-default-cni-562147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0723 15:43:38.783397   77957 main.go:141] libmachine: (enable-default-cni-562147)       <source file='/home/jenkins/minikube-integration/19319-11303/.minikube/machines/enable-default-cni-562147/enable-default-cni-562147.rawdisk'/>
	I0723 15:43:38.783409   77957 main.go:141] libmachine: (enable-default-cni-562147)       <target dev='hda' bus='virtio'/>
	I0723 15:43:38.783420   77957 main.go:141] libmachine: (enable-default-cni-562147)     </disk>
	I0723 15:43:38.783430   77957 main.go:141] libmachine: (enable-default-cni-562147)     <interface type='network'>
	I0723 15:43:38.783442   77957 main.go:141] libmachine: (enable-default-cni-562147)       <source network='mk-enable-default-cni-562147'/>
	I0723 15:43:38.783454   77957 main.go:141] libmachine: (enable-default-cni-562147)       <model type='virtio'/>
	I0723 15:43:38.783465   77957 main.go:141] libmachine: (enable-default-cni-562147)     </interface>
	I0723 15:43:38.783480   77957 main.go:141] libmachine: (enable-default-cni-562147)     <interface type='network'>
	I0723 15:43:38.783497   77957 main.go:141] libmachine: (enable-default-cni-562147)       <source network='default'/>
	I0723 15:43:38.783510   77957 main.go:141] libmachine: (enable-default-cni-562147)       <model type='virtio'/>
	I0723 15:43:38.783521   77957 main.go:141] libmachine: (enable-default-cni-562147)     </interface>
	I0723 15:43:38.783535   77957 main.go:141] libmachine: (enable-default-cni-562147)     <serial type='pty'>
	I0723 15:43:38.783545   77957 main.go:141] libmachine: (enable-default-cni-562147)       <target port='0'/>
	I0723 15:43:38.783561   77957 main.go:141] libmachine: (enable-default-cni-562147)     </serial>
	I0723 15:43:38.783576   77957 main.go:141] libmachine: (enable-default-cni-562147)     <console type='pty'>
	I0723 15:43:38.783589   77957 main.go:141] libmachine: (enable-default-cni-562147)       <target type='serial' port='0'/>
	I0723 15:43:38.783600   77957 main.go:141] libmachine: (enable-default-cni-562147)     </console>
	I0723 15:43:38.783610   77957 main.go:141] libmachine: (enable-default-cni-562147)     <rng model='virtio'>
	I0723 15:43:38.783619   77957 main.go:141] libmachine: (enable-default-cni-562147)       <backend model='random'>/dev/random</backend>
	I0723 15:43:38.783628   77957 main.go:141] libmachine: (enable-default-cni-562147)     </rng>
	I0723 15:43:38.783638   77957 main.go:141] libmachine: (enable-default-cni-562147)     
	I0723 15:43:38.783657   77957 main.go:141] libmachine: (enable-default-cni-562147)     
	I0723 15:43:38.783677   77957 main.go:141] libmachine: (enable-default-cni-562147)   </devices>
	I0723 15:43:38.783690   77957 main.go:141] libmachine: (enable-default-cni-562147) </domain>
	I0723 15:43:38.783699   77957 main.go:141] libmachine: (enable-default-cni-562147) 
	I0723 15:43:38.788041   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:fd:68:58 in network default
	I0723 15:43:38.788699   77957 main.go:141] libmachine: (enable-default-cni-562147) Ensuring networks are active...
	I0723 15:43:38.788718   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:38.789321   77957 main.go:141] libmachine: (enable-default-cni-562147) Ensuring network default is active
	I0723 15:43:38.789673   77957 main.go:141] libmachine: (enable-default-cni-562147) Ensuring network mk-enable-default-cni-562147 is active
	I0723 15:43:38.790141   77957 main.go:141] libmachine: (enable-default-cni-562147) Getting domain xml...
	I0723 15:43:38.790909   77957 main.go:141] libmachine: (enable-default-cni-562147) Creating domain...
	I0723 15:43:40.206124   77957 main.go:141] libmachine: (enable-default-cni-562147) Waiting to get IP...
	I0723 15:43:40.207173   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:40.207746   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:40.207796   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:40.207731   78150 retry.go:31] will retry after 198.963241ms: waiting for machine to come up
	I0723 15:43:40.408498   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:40.409123   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:40.409145   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:40.409083   78150 retry.go:31] will retry after 362.141463ms: waiting for machine to come up
	I0723 15:43:40.772770   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:40.773572   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:40.773603   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:40.773541   78150 retry.go:31] will retry after 349.162137ms: waiting for machine to come up
	I0723 15:43:39.721380   76555 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:43:39.721444   76555 ssh_runner.go:195] Run: which lz4
	I0723 15:43:39.725389   76555 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:43:39.730267   76555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:43:39.730333   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:43:41.070988   76555 crio.go:462] duration metric: took 1.345632308s to copy over tarball
	I0723 15:43:41.071072   76555 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:43:43.514610   76555 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.443502254s)
	I0723 15:43:43.514643   76555 crio.go:469] duration metric: took 2.443626261s to extract the tarball
	I0723 15:43:43.514653   76555 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:43:43.557577   76555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:43:43.601959   76555 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:43:43.601986   76555 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:43:43.601995   76555 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0723 15:43:43.602153   76555 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-562147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0723 15:43:43.602222   76555 ssh_runner.go:195] Run: crio config
	I0723 15:43:43.650176   76555 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0723 15:43:43.650207   76555 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:43:43.650229   76555 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-562147 NodeName:custom-flannel-562147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:43:43.650432   76555 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-562147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:43:43.650512   76555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:43:43.660394   76555 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:43:43.660455   76555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:43:43.669879   76555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0723 15:43:43.687083   76555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:43:43.702162   76555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0723 15:43:43.719209   76555 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0723 15:43:43.722765   76555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:43:43.733812   76555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:43:43.853078   76555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:43:43.873327   76555 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147 for IP: 192.168.72.32
	I0723 15:43:43.873349   76555 certs.go:194] generating shared ca certs ...
	I0723 15:43:43.873364   76555 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:43.873500   76555 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:43:43.873539   76555 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:43:43.873550   76555 certs.go:256] generating profile certs ...
	I0723 15:43:43.873633   76555 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.key
	I0723 15:43:43.873652   76555 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.crt with IP's: []
	I0723 15:43:43.940057   76555 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.crt ...
	I0723 15:43:43.940086   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.crt: {Name:mk19f7d24a1b01c21132b8d6e6c6fb5dced1304f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:43.940258   76555 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.key ...
	I0723 15:43:43.940278   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/client.key: {Name:mk63a925d86cc1ceb077b0832ad25122e57bf585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:43.940396   76555 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key.f295e132
	I0723 15:43:43.940415   76555 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt.f295e132 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.32]
	I0723 15:43:44.053699   76555 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt.f295e132 ...
	I0723 15:43:44.053726   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt.f295e132: {Name:mk9eaf82caedc0d0128c2e07debf018e8e01e580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:44.053882   76555 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key.f295e132 ...
	I0723 15:43:44.053900   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key.f295e132: {Name:mk6d8530d70282b214120b375c07b84d3a5b6671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:44.053990   76555 certs.go:381] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt.f295e132 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt
	I0723 15:43:44.054079   76555 certs.go:385] copying /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key.f295e132 -> /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key
	I0723 15:43:44.054159   76555 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.key
	I0723 15:43:44.054179   76555 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.crt with IP's: []
	I0723 15:43:44.326477   76555 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.crt ...
	I0723 15:43:44.326505   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.crt: {Name:mk4ba06e9df000f2699f81e338dd5d8fdc4f99b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:44.326708   76555 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.key ...
	I0723 15:43:44.326727   76555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.key: {Name:mk311a6945e21d35a8ed27a32cfd377aeb033d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:43:44.326986   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:43:44.327032   76555 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:43:44.327048   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:43:44.327089   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:43:44.327124   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:43:44.327159   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:43:44.327217   76555 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:43:44.327880   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:43:44.358728   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:43:44.386959   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:43:44.417992   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:43:44.453275   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:43:44.486657   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:43:44.512593   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:43:44.543748   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/custom-flannel-562147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:43:44.571967   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:43:44.614374   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:43:44.648965   76555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:43:44.673533   76555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:43:44.691250   76555 ssh_runner.go:195] Run: openssl version
	I0723 15:43:44.697875   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:43:44.710160   76555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:43:44.714915   76555 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:43:44.714966   76555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:43:44.722228   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:43:44.734731   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:43:44.748247   76555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:43:44.752759   76555 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:43:44.752815   76555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:43:44.758806   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:43:44.769948   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:43:44.780727   76555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:43:44.786276   76555 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:43:44.786337   76555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:43:44.793848   76555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:43:44.808037   76555 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:43:44.813124   76555 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 15:43:44.813186   76555 kubeadm.go:392] StartCluster: {Name:custom-flannel-562147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:custom-flannel-562147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:43:44.813265   76555 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:43:44.813306   76555 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:43:44.863341   76555 cri.go:89] found id: ""
	I0723 15:43:44.863413   76555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:43:44.876644   76555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:43:44.887872   76555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:43:44.899056   76555 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:43:44.899074   76555 kubeadm.go:157] found existing configuration files:
	
	I0723 15:43:44.899106   76555 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:43:44.910844   76555 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:43:44.910900   76555 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:43:44.922860   76555 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:43:44.934455   76555 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:43:44.934518   76555 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:43:44.948260   76555 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:43:44.958605   76555 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:43:44.958674   76555 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:43:44.971007   76555 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:43:44.980325   76555 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:43:44.980389   76555 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:43:44.989335   76555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:43:45.056204   76555 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:43:45.056256   76555 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:43:45.191616   76555 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:43:45.191796   76555 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:43:45.191955   76555 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:43:45.442483   76555 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:43:41.124243   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:41.124787   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:41.124817   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:41.124743   78150 retry.go:31] will retry after 524.422907ms: waiting for machine to come up
	I0723 15:43:41.650817   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:41.651406   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:41.651432   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:41.651358   78150 retry.go:31] will retry after 739.578058ms: waiting for machine to come up
	I0723 15:43:42.392561   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:42.393121   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:42.393148   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:42.393069   78150 retry.go:31] will retry after 698.051825ms: waiting for machine to come up
	I0723 15:43:43.092823   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:43.093574   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:43.093608   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:43.093462   78150 retry.go:31] will retry after 1.048589979s: waiting for machine to come up
	I0723 15:43:44.144161   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:44.144696   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:44.144727   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:44.144646   78150 retry.go:31] will retry after 1.318462056s: waiting for machine to come up
	I0723 15:43:45.465161   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | domain enable-default-cni-562147 has defined MAC address 52:54:00:46:8c:00 in network mk-enable-default-cni-562147
	I0723 15:43:45.465657   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | unable to find current IP address of domain enable-default-cni-562147 in network mk-enable-default-cni-562147
	I0723 15:43:45.465682   77957 main.go:141] libmachine: (enable-default-cni-562147) DBG | I0723 15:43:45.465606   78150 retry.go:31] will retry after 1.466301901s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.947455355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749427947423754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bda9b8b-4459-481c-8fc0-e3ac3f885718 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.948042078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2612c28b-9fbd-43c1-95dd-7326f60d55af name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.948097603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2612c28b-9fbd-43c1-95dd-7326f60d55af name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.948320809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2612c28b-9fbd-43c1-95dd-7326f60d55af name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.993874181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=960f29cf-e599-4b83-9c6f-84e133179e43 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.994042737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=960f29cf-e599-4b83-9c6f-84e133179e43 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.995503482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=881e9d6a-ddac-4782-86cb-82aec1f06a0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.995980457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749427995955358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=881e9d6a-ddac-4782-86cb-82aec1f06a0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.996426815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0ec937f-c82c-4e74-a618-070962a2b732 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.996497173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0ec937f-c82c-4e74-a618-070962a2b732 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:47 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:47.996720408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0ec937f-c82c-4e74-a618-070962a2b732 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.045625913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55b2ecf5-5c91-4de6-8a39-f58b0025f65b name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.045737760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55b2ecf5-5c91-4de6-8a39-f58b0025f65b name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.046855472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5150f965-5b2a-4a86-92e1-756c53ff9dd7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.047442858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749428047410654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5150f965-5b2a-4a86-92e1-756c53ff9dd7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.048403239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5581916b-6311-4c43-84e5-1561204df0dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.048481447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5581916b-6311-4c43-84e5-1561204df0dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.048780975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5581916b-6311-4c43-84e5-1561204df0dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.088233813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15f04551-ffdc-4ff6-97d9-d837bf5dabbf name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.088357033Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f04551-ffdc-4ff6-97d9-d837bf5dabbf name=/runtime.v1.RuntimeService/Version
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.090183320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b6ea3ca-3e03-47a4-b337-df1d92042556 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.091899184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749428091855736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b6ea3ca-3e03-47a4-b337-df1d92042556 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.092598236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cd8e12c-7791-4ea3-be6b-ab652ef6ba5b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.092666318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cd8e12c-7791-4ea3-be6b-ab652ef6ba5b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:43:48 default-k8s-diff-port-911217 crio[724]: time="2024-07-23 15:43:48.092923147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748105840475027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d80bea625fdc2b0cdabc7e7039737e0ad37b0335db55ddccfd149449b4da18,PodSandboxId:78a22f7d4c71b550cbb21b935d61a905997f36d2ec3f623f6ecd568cad57cf48,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748085537681658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885,},Annotations:map[string]string{io.kubernetes.container.hash: b92acf39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344,PodSandboxId:b275707ca1bdcadb4bd0c6c25fcc12933ad1cf235e68fe3d3b713cc2ac7d98c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748082698204949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9qcfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663c125b-bed4-4622-8f0c-ff7837073bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 51b9a655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb,PodSandboxId:e085ea6e5fe2e316fac2f5fef3537adb9c34b3bbdb7dd5a7e6e3f1f39ae23b18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721748075042647595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55082c05-5
fee-4c2a-ab31-897d838164d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9e588327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab,PodSandboxId:ca8fdb1501073525255e5cf2602cee6dada8253097d34daa6a63aab4d666ab37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748075018492032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a893464-6a36-4a91-9dde
-8cb58d7dcfa8,},Annotations:map[string]string{io.kubernetes.container.hash: c3603b24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3,PodSandboxId:5b96d807e79249196d07707263792b44883aa5e720450f303729e0f88d907005,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721748070404106068,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3b8c85bbf0ed67c3c9
d628e2d961e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0,PodSandboxId:bbb44bef6c4ae156dc250c211a43d6734121bdb9c0a562ca7b1388f26ea81e75,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721748070327020409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429546dbaed2c01c11bb28a15be2d102,},Annotations:map[st
ring]string{io.kubernetes.container.hash: ba531085,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da,PodSandboxId:914b892d84f87609bacb25d3fceef6ceacba80e3aedf7ffa26fce57861b8381d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721748070296024639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc21cdd18d25fadf0e2d43494d5
ec86,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e,PodSandboxId:928ac961f34d10a798eb6fadb08a5ded5a056a81522ad815d9aae50f7fb6ee21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721748070280020625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-911217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0147c985073f7215a7c36182709521
e5,},Annotations:map[string]string{io.kubernetes.container.hash: d7649beb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cd8e12c-7791-4ea3-be6b-ab652ef6ba5b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68672c3e7b7b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   ca8fdb1501073       storage-provisioner
	b9d80bea625fd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   78a22f7d4c71b       busybox
	b58d38beb8d00       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   b275707ca1bdc       coredns-7db6d8ff4d-9qcfs
	48a478b951b42       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      22 minutes ago      Running             kube-proxy                1                   e085ea6e5fe2e       kube-proxy-d4zwd
	01a650a53706b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   ca8fdb1501073       storage-provisioner
	9ac0a72e37831       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      22 minutes ago      Running             kube-scheduler            1                   5b96d807e7924       kube-scheduler-default-k8s-diff-port-911217
	e73340ee36d2f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   bbb44bef6c4ae       etcd-default-k8s-diff-port-911217
	bcc1ca16d82a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      22 minutes ago      Running             kube-controller-manager   1                   914b892d84f87       kube-controller-manager-default-k8s-diff-port-911217
	96e46e540ab2c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      22 minutes ago      Running             kube-apiserver            1                   928ac961f34d1       kube-apiserver-default-k8s-diff-port-911217
	
	
	==> coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35430 - 47338 "HINFO IN 3073176849920810953.3099087362000300018. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009793717s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-911217
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-911217
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=default-k8s-diff-port-911217
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_15_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:15:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-911217
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:42:09 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:42:09 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:42:09 +0000   Tue, 23 Jul 2024 15:15:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:42:09 +0000   Tue, 23 Jul 2024 15:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.64
	  Hostname:    default-k8s-diff-port-911217
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c57467d256054452b1a17d665265bdd8
	  System UUID:                c57467d2-5605-4452-b1a1-7d665265bdd8
	  Boot ID:                    a16276a0-e176-4523-9c31-de84f88a7ebc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-9qcfs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-default-k8s-diff-port-911217                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-911217             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-911217    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-d4zwd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-default-k8s-diff-port-911217             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-mkl8l                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-911217 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node default-k8s-diff-port-911217 event: Registered Node default-k8s-diff-port-911217 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-911217 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-911217 event: Registered Node default-k8s-diff-port-911217 in Controller
	
	
	==> dmesg <==
	[Jul23 15:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055514] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048807] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.925004] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul23 15:21] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.054667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064954] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.208133] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.129051] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.302029] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.259904] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.058985] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.143250] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.583923] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.975582] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +5.537445] kauditd_printk_skb: 78 callbacks suppressed
	[ +23.420899] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] <==
	{"level":"warn","ts":"2024-07-23T15:41:38.077639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.292669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:41:38.079233Z","caller":"traceutil/trace.go:171","msg":"trace[1347214683] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1544; }","duration":"127.971665ms","start":"2024-07-23T15:41:37.951244Z","end":"2024-07-23T15:41:38.079216Z","steps":["trace[1347214683] 'agreement among raft nodes before linearized reading'  (duration: 126.290653ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:42:04.406071Z","caller":"traceutil/trace.go:171","msg":"trace[919381395] transaction","detail":"{read_only:false; response_revision:1565; number_of_response:1; }","duration":"157.976012ms","start":"2024-07-23T15:42:04.248079Z","end":"2024-07-23T15:42:04.406055Z","steps":["trace[919381395] 'process raft request'  (duration: 157.862949ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:42:05.215313Z","caller":"traceutil/trace.go:171","msg":"trace[1914032271] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"130.233714ms","start":"2024-07-23T15:42:05.085064Z","end":"2024-07-23T15:42:05.215297Z","steps":["trace[1914032271] 'process raft request'  (duration: 64.660456ms)","trace[1914032271] 'compare'  (duration: 65.495832ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T15:42:29.714308Z","caller":"traceutil/trace.go:171","msg":"trace[2096601190] linearizableReadLoop","detail":"{readStateIndex:1880; appliedIndex:1879; }","duration":"320.553353ms","start":"2024-07-23T15:42:29.393739Z","end":"2024-07-23T15:42:29.714292Z","steps":["trace[2096601190] 'read index received'  (duration: 320.399831ms)","trace[2096601190] 'applied index is now lower than readState.Index'  (duration: 152.828µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:42:29.714544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.784679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:42:29.714597Z","caller":"traceutil/trace.go:171","msg":"trace[333420039] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1586; }","duration":"320.884634ms","start":"2024-07-23T15:42:29.393702Z","end":"2024-07-23T15:42:29.714586Z","steps":["trace[333420039] 'agreement among raft nodes before linearized reading'  (duration: 320.79863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:42:29.714677Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:42:29.393678Z","time spent":"320.981505ms","remote":"127.0.0.1:54114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-23T15:42:29.714591Z","caller":"traceutil/trace.go:171","msg":"trace[1297126855] transaction","detail":"{read_only:false; response_revision:1586; number_of_response:1; }","duration":"466.671931ms","start":"2024-07-23T15:42:29.247899Z","end":"2024-07-23T15:42:29.714571Z","steps":["trace[1297126855] 'process raft request'  (duration: 466.278894ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:42:29.715859Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:42:29.247882Z","time spent":"467.773143ms","remote":"127.0.0.1:54394","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-h2ajtta5am2ojab6o363ndug3u\" mod_revision:1578 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-h2ajtta5am2ojab6o363ndug3u\" value_size:619 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-h2ajtta5am2ojab6o363ndug3u\" > >"}
	{"level":"warn","ts":"2024-07-23T15:42:30.155484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.225728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:42:30.15562Z","caller":"traceutil/trace.go:171","msg":"trace[1949557589] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1586; }","duration":"206.38581ms","start":"2024-07-23T15:42:29.949213Z","end":"2024-07-23T15:42:30.155599Z","steps":["trace[1949557589] 'range keys from in-memory index tree'  (duration: 206.170432ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:42:30.16199Z","caller":"traceutil/trace.go:171","msg":"trace[439200416] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"115.888152ms","start":"2024-07-23T15:42:30.04609Z","end":"2024-07-23T15:42:30.161978Z","steps":["trace[439200416] 'process raft request'  (duration: 115.775441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:42:30.784159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.476983ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1087778602539367576 > lease_revoke:<id:0f1890e02ea13c52>","response":"size:29"}
	{"level":"info","ts":"2024-07-23T15:42:30.78425Z","caller":"traceutil/trace.go:171","msg":"trace[1745170563] linearizableReadLoop","detail":"{readStateIndex:1882; appliedIndex:1881; }","duration":"390.200692ms","start":"2024-07-23T15:42:30.394035Z","end":"2024-07-23T15:42:30.784236Z","steps":["trace[1745170563] 'read index received'  (duration: 47.397966ms)","trace[1745170563] 'applied index is now lower than readState.Index'  (duration: 342.801488ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:42:30.784349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.307723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:42:30.784384Z","caller":"traceutil/trace.go:171","msg":"trace[573896634] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1587; }","duration":"390.371837ms","start":"2024-07-23T15:42:30.394004Z","end":"2024-07-23T15:42:30.784376Z","steps":["trace[573896634] 'agreement among raft nodes before linearized reading'  (duration: 390.313255ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:42:30.784416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:42:30.39399Z","time spent":"390.418562ms","remote":"127.0.0.1:54114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-23T15:42:30.784497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.814438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2024-07-23T15:42:30.784581Z","caller":"traceutil/trace.go:171","msg":"trace[792241077] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1587; }","duration":"256.923911ms","start":"2024-07-23T15:42:30.527644Z","end":"2024-07-23T15:42:30.784568Z","steps":["trace[792241077] 'agreement among raft nodes before linearized reading'  (duration: 256.70122ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:42:30.963556Z","caller":"traceutil/trace.go:171","msg":"trace[1466013892] transaction","detail":"{read_only:false; response_revision:1588; number_of_response:1; }","duration":"174.589484ms","start":"2024-07-23T15:42:30.788948Z","end":"2024-07-23T15:42:30.963537Z","steps":["trace[1466013892] 'process raft request'  (duration: 174.275009ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:42:47.200456Z","caller":"traceutil/trace.go:171","msg":"trace[1037270955] transaction","detail":"{read_only:false; response_revision:1601; number_of_response:1; }","duration":"159.40233ms","start":"2024-07-23T15:42:47.040653Z","end":"2024-07-23T15:42:47.200055Z","steps":["trace[1037270955] 'process raft request'  (duration: 159.276454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:43:10.541619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.699054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1087778602539367775 > lease_revoke:<id:0f1890e02ea13d14>","response":"size:29"}
	{"level":"warn","ts":"2024-07-23T15:43:45.299421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.783507ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1087778602539367947 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.64\" mod_revision:1640 > success:<request_put:<key:\"/registry/masterleases/192.168.61.64\" value_size:67 lease:1087778602539367945 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.64\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-23T15:43:45.299589Z","caller":"traceutil/trace.go:171","msg":"trace[1922808784] transaction","detail":"{read_only:false; response_revision:1648; number_of_response:1; }","duration":"191.777931ms","start":"2024-07-23T15:43:45.107766Z","end":"2024-07-23T15:43:45.299544Z","steps":["trace[1922808784] 'process raft request'  (duration: 67.674149ms)","trace[1922808784] 'compare'  (duration: 123.625843ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:43:48 up 23 min,  0 users,  load average: 1.32, 0.42, 0.19
	Linux default-k8s-diff-port-911217 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] <==
	I0723 15:37:14.854267       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:39:14.853013       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:39:14.853293       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:39:14.853327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:39:14.854438       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:39:14.854590       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:39:14.854667       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:41:13.856750       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:41:13.856917       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0723 15:41:14.857347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:41:14.857405       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:41:14.857418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:41:14.857454       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:41:14.857507       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:41:14.858623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:42:14.858216       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:42:14.858609       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0723 15:42:14.858664       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:42:14.858971       1 handler_proxy.go:93] no RequestInfo found in the context
	E0723 15:42:14.859054       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 15:42:14.860507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] <==
	I0723 15:37:57.986030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:38:27.491448       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:38:27.993228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:38:57.496089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:38:58.001726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:27.502207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:39:28.009246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:57.507362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:39:58.017138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:27.512063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:40:28.026100       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:57.518399       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:40:58.034744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:41:27.522756       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:41:28.042219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:41:57.530524       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:41:58.053247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:42:27.537230       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:42:28.080526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:42:37.672899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="377.322µs"
	I0723 15:42:49.669600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="115.584µs"
	E0723 15:42:57.544579       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:42:58.090605       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:43:27.549247       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0723 15:43:28.098448       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] <==
	I0723 15:21:15.173901       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:21:15.182637       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.64"]
	I0723 15:21:15.235902       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0723 15:21:15.235939       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:21:15.235969       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:21:15.241153       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:21:15.243986       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:21:15.244158       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:15.247867       1 config.go:192] "Starting service config controller"
	I0723 15:21:15.247929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:21:15.247987       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:21:15.248015       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:21:15.248758       1 config.go:319] "Starting node config controller"
	I0723 15:21:15.250088       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:21:15.349138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:21:15.349211       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:21:15.350922       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] <==
	I0723 15:21:11.281031       1 serving.go:380] Generated self-signed cert in-memory
	W0723 15:21:13.815272       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 15:21:13.815400       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:21:13.815441       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:21:13.815477       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:21:13.837439       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 15:21:13.838843       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:13.840623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:21:13.844554       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 15:21:13.844616       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:21:13.847082       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:13.949598       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:41:38 default-k8s-diff-port-911217 kubelet[935]: E0723 15:41:38.653491     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:41:50 default-k8s-diff-port-911217 kubelet[935]: E0723 15:41:50.652891     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:42:02 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:02.653236     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:42:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:09.671935     935 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:42:09 default-k8s-diff-port-911217 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:42:09 default-k8s-diff-port-911217 kubelet[935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:42:09 default-k8s-diff-port-911217 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:42:09 default-k8s-diff-port-911217 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:42:13 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:13.653562     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:42:26 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:26.669944     935 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 23 15:42:26 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:26.670034     935 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 23 15:42:26 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:26.670258     935 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scp4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-mkl8l_kube-system(9e129e04-b1b8-47e8-9c07-20cdc89705e4): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 23 15:42:26 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:26.670303     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:42:37 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:37.654261     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:42:49 default-k8s-diff-port-911217 kubelet[935]: E0723 15:42:49.653202     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:43:02 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:02.652683     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:43:09 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:09.685526     935 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:43:09 default-k8s-diff-port-911217 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:43:09 default-k8s-diff-port-911217 kubelet[935]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:43:09 default-k8s-diff-port-911217 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:43:09 default-k8s-diff-port-911217 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:43:13 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:13.655350     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:43:24 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:24.653136     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:43:35 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:35.653586     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	Jul 23 15:43:46 default-k8s-diff-port-911217 kubelet[935]: E0723 15:43:46.654488     935 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mkl8l" podUID="9e129e04-b1b8-47e8-9c07-20cdc89705e4"
	
	
	==> storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] <==
	I0723 15:21:15.120730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 15:21:45.125492       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] <==
	I0723 15:21:45.948925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:21:45.960155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:21:45.960310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:21:45.972149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:21:45.972756       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdbe9b7-9c70-4aaf-9bed-7816d87777fa", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc became leader
	I0723 15:21:45.972969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc!
	I0723 15:21:46.073617       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-911217_b01d1de4-13f2-47ea-a9a9-a1c2c8db6efc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mkl8l
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l: exit status 1 (88.266246ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mkl8l" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-911217 describe pod metrics-server-569cc877fc-mkl8l: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (355.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-543029 -n no-preload-543029
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-23 15:41:06.014721543 +0000 UTC m=+6275.220466279
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-543029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-543029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.862µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-543029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-543029 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-543029 logs -n 25: (1.198209276s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:39 UTC | 23 Jul 24 15:39 UTC |
	| start   | -p newest-cni-459494 --memory=2200 --alsologtostderr   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:39 UTC | 23 Jul 24 15:40 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-459494             | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-459494                                   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-459494                  | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC | 23 Jul 24 15:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-459494 --memory=2200 --alsologtostderr   | newest-cni-459494            | jenkins | v1.33.1 | 23 Jul 24 15:40 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:40:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:40:32.606402   72884 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:40:32.606637   72884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:40:32.606644   72884 out.go:304] Setting ErrFile to fd 2...
	I0723 15:40:32.606648   72884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:40:32.606833   72884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:40:32.607339   72884 out.go:298] Setting JSON to false
	I0723 15:40:32.608210   72884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8579,"bootTime":1721740654,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:40:32.608260   72884 start.go:139] virtualization: kvm guest
	I0723 15:40:32.610708   72884 out.go:177] * [newest-cni-459494] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:40:32.612162   72884 notify.go:220] Checking for updates...
	I0723 15:40:32.612175   72884 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:40:32.613859   72884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:40:32.615266   72884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:40:32.616514   72884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:40:32.617718   72884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:40:32.618879   72884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:40:32.620623   72884 config.go:182] Loaded profile config "newest-cni-459494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:40:32.621220   72884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:40:32.621288   72884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:40:32.636639   72884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0723 15:40:32.637059   72884 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:40:32.637639   72884 main.go:141] libmachine: Using API Version  1
	I0723 15:40:32.637664   72884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:40:32.637956   72884 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:40:32.638129   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:32.638427   72884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:40:32.638706   72884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:40:32.638739   72884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:40:32.653496   72884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0723 15:40:32.653916   72884 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:40:32.654524   72884 main.go:141] libmachine: Using API Version  1
	I0723 15:40:32.654548   72884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:40:32.654818   72884 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:40:32.655019   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:32.691374   72884 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:40:32.692543   72884 start.go:297] selected driver: kvm2
	I0723 15:40:32.692559   72884 start.go:901] validating driver "kvm2" against &{Name:newest-cni-459494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-459494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:40:32.692701   72884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:40:32.693345   72884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:40:32.693417   72884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:40:32.708158   72884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:40:32.708629   72884 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0723 15:40:32.708707   72884 cni.go:84] Creating CNI manager for ""
	I0723 15:40:32.708725   72884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:40:32.708772   72884 start.go:340] cluster config:
	{Name:newest-cni-459494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-459494 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:40:32.708908   72884 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:40:32.710981   72884 out.go:177] * Starting "newest-cni-459494" primary control-plane node in "newest-cni-459494" cluster
	I0723 15:40:32.712261   72884 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:40:32.712298   72884 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0723 15:40:32.712306   72884 cache.go:56] Caching tarball of preloaded images
	I0723 15:40:32.712376   72884 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:40:32.712387   72884 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0723 15:40:32.712487   72884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/config.json ...
	I0723 15:40:32.712661   72884 start.go:360] acquireMachinesLock for newest-cni-459494: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:40:32.712705   72884 start.go:364] duration metric: took 27.228µs to acquireMachinesLock for "newest-cni-459494"
	I0723 15:40:32.712719   72884 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:40:32.712725   72884 fix.go:54] fixHost starting: 
	I0723 15:40:32.712993   72884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:40:32.713023   72884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:40:32.728418   72884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34861
	I0723 15:40:32.728845   72884 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:40:32.729324   72884 main.go:141] libmachine: Using API Version  1
	I0723 15:40:32.729348   72884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:40:32.729849   72884 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:40:32.730075   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:32.730245   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetState
	I0723 15:40:32.731975   72884 fix.go:112] recreateIfNeeded on newest-cni-459494: state=Stopped err=<nil>
	I0723 15:40:32.732001   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	W0723 15:40:32.732324   72884 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:40:32.734441   72884 out.go:177] * Restarting existing kvm2 VM for "newest-cni-459494" ...
	I0723 15:40:32.735809   72884 main.go:141] libmachine: (newest-cni-459494) Calling .Start
	I0723 15:40:32.735988   72884 main.go:141] libmachine: (newest-cni-459494) Ensuring networks are active...
	I0723 15:40:32.736700   72884 main.go:141] libmachine: (newest-cni-459494) Ensuring network default is active
	I0723 15:40:32.736956   72884 main.go:141] libmachine: (newest-cni-459494) Ensuring network mk-newest-cni-459494 is active
	I0723 15:40:32.737290   72884 main.go:141] libmachine: (newest-cni-459494) Getting domain xml...
	I0723 15:40:32.738054   72884 main.go:141] libmachine: (newest-cni-459494) Creating domain...
	I0723 15:40:33.979533   72884 main.go:141] libmachine: (newest-cni-459494) Waiting to get IP...
	I0723 15:40:33.980419   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:33.980822   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:33.980854   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:33.980778   72919 retry.go:31] will retry after 219.525775ms: waiting for machine to come up
	I0723 15:40:34.202445   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:34.202970   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:34.203000   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:34.202915   72919 retry.go:31] will retry after 273.860763ms: waiting for machine to come up
	I0723 15:40:34.478493   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:34.478986   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:34.479013   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:34.478942   72919 retry.go:31] will retry after 382.332821ms: waiting for machine to come up
	I0723 15:40:34.863407   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:34.863950   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:34.863988   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:34.863898   72919 retry.go:31] will retry after 509.697306ms: waiting for machine to come up
	I0723 15:40:35.375263   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:35.375791   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:35.375819   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:35.375742   72919 retry.go:31] will retry after 659.083225ms: waiting for machine to come up
	I0723 15:40:36.035978   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:36.036452   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:36.036481   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:36.036398   72919 retry.go:31] will retry after 698.42536ms: waiting for machine to come up
	I0723 15:40:36.736121   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:36.736767   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:36.736795   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:36.736730   72919 retry.go:31] will retry after 792.754443ms: waiting for machine to come up
	I0723 15:40:37.530649   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:37.531211   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:37.531240   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:37.531156   72919 retry.go:31] will retry after 1.393558853s: waiting for machine to come up
	I0723 15:40:38.926137   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:38.926638   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:38.926693   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:38.926629   72919 retry.go:31] will retry after 1.699935172s: waiting for machine to come up
	I0723 15:40:40.628460   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:40.628941   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:40.628990   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:40.628896   72919 retry.go:31] will retry after 1.786167322s: waiting for machine to come up
	I0723 15:40:42.416731   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:42.417325   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:42.417356   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:42.417273   72919 retry.go:31] will retry after 2.354949974s: waiting for machine to come up
	I0723 15:40:44.773588   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:44.774026   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:44.774069   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:44.773987   72919 retry.go:31] will retry after 3.618668036s: waiting for machine to come up
	I0723 15:40:48.394168   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:48.394663   72884 main.go:141] libmachine: (newest-cni-459494) DBG | unable to find current IP address of domain newest-cni-459494 in network mk-newest-cni-459494
	I0723 15:40:48.394693   72884 main.go:141] libmachine: (newest-cni-459494) DBG | I0723 15:40:48.394610   72919 retry.go:31] will retry after 4.296293096s: waiting for machine to come up
	I0723 15:40:52.695740   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.696255   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has current primary IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.696276   72884 main.go:141] libmachine: (newest-cni-459494) Found IP for machine: 192.168.50.147
	I0723 15:40:52.696287   72884 main.go:141] libmachine: (newest-cni-459494) Reserving static IP address...
	I0723 15:40:52.696819   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "newest-cni-459494", mac: "52:54:00:9c:e9:00", ip: "192.168.50.147"} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:52.696874   72884 main.go:141] libmachine: (newest-cni-459494) DBG | skip adding static IP to network mk-newest-cni-459494 - found existing host DHCP lease matching {name: "newest-cni-459494", mac: "52:54:00:9c:e9:00", ip: "192.168.50.147"}
	I0723 15:40:52.696886   72884 main.go:141] libmachine: (newest-cni-459494) Reserved static IP address: 192.168.50.147
	I0723 15:40:52.696899   72884 main.go:141] libmachine: (newest-cni-459494) Waiting for SSH to be available...
	I0723 15:40:52.696913   72884 main.go:141] libmachine: (newest-cni-459494) DBG | Getting to WaitForSSH function...
	I0723 15:40:52.699406   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.699760   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:52.699789   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.699938   72884 main.go:141] libmachine: (newest-cni-459494) DBG | Using SSH client type: external
	I0723 15:40:52.699967   72884 main.go:141] libmachine: (newest-cni-459494) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa (-rw-------)
	I0723 15:40:52.699997   72884 main.go:141] libmachine: (newest-cni-459494) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:40:52.700023   72884 main.go:141] libmachine: (newest-cni-459494) DBG | About to run SSH command:
	I0723 15:40:52.700042   72884 main.go:141] libmachine: (newest-cni-459494) DBG | exit 0
	I0723 15:40:52.826305   72884 main.go:141] libmachine: (newest-cni-459494) DBG | SSH cmd err, output: <nil>: 
	I0723 15:40:52.826700   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetConfigRaw
	I0723 15:40:52.827327   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetIP
	I0723 15:40:52.830398   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.830861   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:52.830882   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.831155   72884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/config.json ...
	I0723 15:40:52.831370   72884 machine.go:94] provisionDockerMachine start ...
	I0723 15:40:52.831396   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:52.831672   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:52.834405   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.834843   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:52.834870   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.835078   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:52.835326   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:52.835487   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:52.835628   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:52.835799   72884 main.go:141] libmachine: Using SSH client type: native
	I0723 15:40:52.836030   72884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0723 15:40:52.836045   72884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:40:52.942893   72884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:40:52.942917   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetMachineName
	I0723 15:40:52.943152   72884 buildroot.go:166] provisioning hostname "newest-cni-459494"
	I0723 15:40:52.943175   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetMachineName
	I0723 15:40:52.943350   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:52.946496   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.946881   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:52.946921   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:52.947064   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:52.947238   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:52.947441   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:52.947690   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:52.947876   72884 main.go:141] libmachine: Using SSH client type: native
	I0723 15:40:52.948030   72884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0723 15:40:52.948042   72884 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-459494 && echo "newest-cni-459494" | sudo tee /etc/hostname
	I0723 15:40:53.067785   72884 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-459494
	
	I0723 15:40:53.067817   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.071227   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.071620   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.071649   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.071831   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:53.072012   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.072181   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.072351   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:53.072523   72884 main.go:141] libmachine: Using SSH client type: native
	I0723 15:40:53.072728   72884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0723 15:40:53.072747   72884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-459494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-459494/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-459494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:40:53.186926   72884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:40:53.186953   72884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:40:53.186980   72884 buildroot.go:174] setting up certificates
	I0723 15:40:53.186988   72884 provision.go:84] configureAuth start
	I0723 15:40:53.186996   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetMachineName
	I0723 15:40:53.187277   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetIP
	I0723 15:40:53.189760   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.190113   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.190134   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.190332   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.192483   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.192834   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.192861   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.192989   72884 provision.go:143] copyHostCerts
	I0723 15:40:53.193067   72884 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:40:53.193081   72884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:40:53.193161   72884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:40:53.193257   72884 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:40:53.193266   72884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:40:53.193305   72884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:40:53.193356   72884 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:40:53.193363   72884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:40:53.193383   72884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:40:53.193425   72884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.newest-cni-459494 san=[127.0.0.1 192.168.50.147 localhost minikube newest-cni-459494]
	I0723 15:40:53.331195   72884 provision.go:177] copyRemoteCerts
	I0723 15:40:53.331283   72884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:40:53.331315   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.334100   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.334441   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.334479   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.334694   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:53.334889   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.335136   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:53.335323   72884 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa Username:docker}
	I0723 15:40:53.422016   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:40:53.445734   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:40:53.468709   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:40:53.491797   72884 provision.go:87] duration metric: took 304.796256ms to configureAuth
	I0723 15:40:53.491823   72884 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:40:53.492003   72884 config.go:182] Loaded profile config "newest-cni-459494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:40:53.492084   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.495171   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.495582   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.495621   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.495775   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:53.495972   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.496127   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.496281   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:53.496454   72884 main.go:141] libmachine: Using SSH client type: native
	I0723 15:40:53.496638   72884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0723 15:40:53.496663   72884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:40:53.761098   72884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:40:53.761126   72884 machine.go:97] duration metric: took 929.738379ms to provisionDockerMachine
	I0723 15:40:53.761139   72884 start.go:293] postStartSetup for "newest-cni-459494" (driver="kvm2")
	I0723 15:40:53.761151   72884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:40:53.761180   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:53.761535   72884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:40:53.761568   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.764541   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.764969   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.764993   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.765214   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:53.765398   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.765596   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:53.765789   72884 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa Username:docker}
	I0723 15:40:53.856894   72884 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:40:53.860936   72884 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:40:53.860967   72884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:40:53.861033   72884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:40:53.861104   72884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:40:53.861199   72884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:40:53.870734   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:40:53.893890   72884 start.go:296] duration metric: took 132.73788ms for postStartSetup
	I0723 15:40:53.893935   72884 fix.go:56] duration metric: took 21.181207878s for fixHost
	I0723 15:40:53.893959   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:53.896352   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.896790   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:53.896820   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:53.897016   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:53.897238   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.897532   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:53.897721   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:53.897912   72884 main.go:141] libmachine: Using SSH client type: native
	I0723 15:40:53.898163   72884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0723 15:40:53.898180   72884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:40:54.010957   72884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721749253.989341398
	
	I0723 15:40:54.010981   72884 fix.go:216] guest clock: 1721749253.989341398
	I0723 15:40:54.010996   72884 fix.go:229] Guest: 2024-07-23 15:40:53.989341398 +0000 UTC Remote: 2024-07-23 15:40:53.893939516 +0000 UTC m=+21.322831808 (delta=95.401882ms)
	I0723 15:40:54.011026   72884 fix.go:200] guest clock delta is within tolerance: 95.401882ms
	I0723 15:40:54.011037   72884 start.go:83] releasing machines lock for "newest-cni-459494", held for 21.298322394s
	I0723 15:40:54.011062   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:54.011341   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetIP
	I0723 15:40:54.013816   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.014177   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:54.014205   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.014313   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:54.014894   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:54.015072   72884 main.go:141] libmachine: (newest-cni-459494) Calling .DriverName
	I0723 15:40:54.015187   72884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:40:54.015231   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:54.015293   72884 ssh_runner.go:195] Run: cat /version.json
	I0723 15:40:54.015316   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHHostname
	I0723 15:40:54.018070   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.018423   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.018452   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:54.018468   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.018624   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:54.018829   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:54.018860   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:54.018882   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:54.019002   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:54.019070   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHPort
	I0723 15:40:54.019154   72884 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa Username:docker}
	I0723 15:40:54.019272   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHKeyPath
	I0723 15:40:54.019424   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetSSHUsername
	I0723 15:40:54.019587   72884 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/newest-cni-459494/id_rsa Username:docker}
	I0723 15:40:54.103408   72884 ssh_runner.go:195] Run: systemctl --version
	I0723 15:40:54.140052   72884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:40:54.283219   72884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:40:54.289419   72884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:40:54.289484   72884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:40:54.305173   72884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:40:54.305196   72884 start.go:495] detecting cgroup driver to use...
	I0723 15:40:54.305261   72884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:40:54.323419   72884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:40:54.339080   72884 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:40:54.339139   72884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:40:54.353676   72884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:40:54.368493   72884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:40:54.492429   72884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:40:54.647932   72884 docker.go:233] disabling docker service ...
	I0723 15:40:54.648025   72884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:40:54.662148   72884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:40:54.675647   72884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:40:54.798884   72884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:40:54.918030   72884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:40:54.933019   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:40:54.952738   72884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:40:54.952812   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:54.963483   72884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:40:54.963561   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:54.974452   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:54.985108   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:54.995094   72884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:40:55.006346   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:55.016584   72884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:55.033824   72884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:40:55.044381   72884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:40:55.054673   72884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:40:55.054729   72884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:40:55.067914   72884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:40:55.077635   72884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:40:55.199150   72884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:40:55.334036   72884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:40:55.334116   72884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:40:55.339594   72884 start.go:563] Will wait 60s for crictl version
	I0723 15:40:55.339660   72884 ssh_runner.go:195] Run: which crictl
	I0723 15:40:55.343225   72884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:40:55.380095   72884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:40:55.380197   72884 ssh_runner.go:195] Run: crio --version
	I0723 15:40:55.408532   72884 ssh_runner.go:195] Run: crio --version
	I0723 15:40:55.439218   72884 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0723 15:40:55.440429   72884 main.go:141] libmachine: (newest-cni-459494) Calling .GetIP
	I0723 15:40:55.443289   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:55.443705   72884 main.go:141] libmachine: (newest-cni-459494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e9:00", ip: ""} in network mk-newest-cni-459494: {Iface:virbr4 ExpiryTime:2024-07-23 16:40:43 +0000 UTC Type:0 Mac:52:54:00:9c:e9:00 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:newest-cni-459494 Clientid:01:52:54:00:9c:e9:00}
	I0723 15:40:55.443732   72884 main.go:141] libmachine: (newest-cni-459494) DBG | domain newest-cni-459494 has defined IP address 192.168.50.147 and MAC address 52:54:00:9c:e9:00 in network mk-newest-cni-459494
	I0723 15:40:55.443972   72884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:40:55.448112   72884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:40:55.461330   72884 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0723 15:40:55.462600   72884 kubeadm.go:883] updating cluster {Name:newest-cni-459494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-459494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:40:55.462702   72884 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:40:55.462748   72884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:40:55.498919   72884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:40:55.498975   72884 ssh_runner.go:195] Run: which lz4
	I0723 15:40:55.502729   72884 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:40:55.506675   72884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:40:55.506707   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0723 15:40:56.668829   72884 crio.go:462] duration metric: took 1.166124084s to copy over tarball
	I0723 15:40:56.668905   72884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:40:58.718839   72884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.049904247s)
	I0723 15:40:58.718871   72884 crio.go:469] duration metric: took 2.050014456s to extract the tarball
	I0723 15:40:58.718879   72884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:40:58.754879   72884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:40:58.795147   72884 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:40:58.795213   72884 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:40:58.795249   72884 kubeadm.go:934] updating node { 192.168.50.147 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:40:58.795496   72884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-459494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-459494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:40:58.795951   72884 ssh_runner.go:195] Run: crio config
	I0723 15:40:58.844946   72884 cni.go:84] Creating CNI manager for ""
	I0723 15:40:58.844971   72884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:40:58.844984   72884 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0723 15:40:58.845012   72884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.147 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-459494 NodeName:newest-cni-459494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:40:58.845175   72884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-459494"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:40:58.845249   72884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:40:58.854996   72884 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:40:58.855071   72884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:40:58.864248   72884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0723 15:40:58.879705   72884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:40:58.894866   72884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0723 15:40:58.911159   72884 ssh_runner.go:195] Run: grep 192.168.50.147	control-plane.minikube.internal$ /etc/hosts
	I0723 15:40:58.915038   72884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:40:58.927968   72884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:40:59.052526   72884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:40:59.069128   72884 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494 for IP: 192.168.50.147
	I0723 15:40:59.069186   72884 certs.go:194] generating shared ca certs ...
	I0723 15:40:59.069210   72884 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:40:59.069386   72884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:40:59.069464   72884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:40:59.069478   72884 certs.go:256] generating profile certs ...
	I0723 15:40:59.069616   72884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/client.key
	I0723 15:40:59.069705   72884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/apiserver.key.2967c677
	I0723 15:40:59.069763   72884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/proxy-client.key
	I0723 15:40:59.069902   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:40:59.069961   72884 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:40:59.069975   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:40:59.070008   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:40:59.070037   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:40:59.070069   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:40:59.070146   72884 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:40:59.070986   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:40:59.110534   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:40:59.137112   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:40:59.171728   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:40:59.197430   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:40:59.221223   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:40:59.249572   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:40:59.276943   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/newest-cni-459494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:40:59.300470   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:40:59.323627   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:40:59.346536   72884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:40:59.370047   72884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:40:59.387402   72884 ssh_runner.go:195] Run: openssl version
	I0723 15:40:59.393067   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:40:59.404643   72884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:40:59.408771   72884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:40:59.408821   72884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:40:59.414289   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:40:59.424520   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:40:59.435406   72884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:40:59.440120   72884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:40:59.440167   72884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:40:59.446968   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:40:59.458474   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:40:59.470525   72884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:40:59.474980   72884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:40:59.475031   72884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:40:59.480603   72884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:40:59.491268   72884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:40:59.495693   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:40:59.501366   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:40:59.506778   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:40:59.512227   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:40:59.517554   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:40:59.522885   72884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:40:59.528150   72884 kubeadm.go:392] StartCluster: {Name:newest-cni-459494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-459494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:40:59.528232   72884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:40:59.528282   72884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:40:59.568716   72884 cri.go:89] found id: ""
	I0723 15:40:59.568778   72884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:40:59.580043   72884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:40:59.580069   72884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:40:59.580116   72884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:40:59.589692   72884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:40:59.590675   72884 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-459494" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:40:59.591252   72884 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-459494" cluster setting kubeconfig missing "newest-cni-459494" context setting]
	I0723 15:40:59.592167   72884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:40:59.593686   72884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:40:59.603473   72884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.147
	I0723 15:40:59.603513   72884 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:40:59.603529   72884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:40:59.603571   72884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:40:59.644181   72884 cri.go:89] found id: ""
	I0723 15:40:59.644252   72884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:40:59.660785   72884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:40:59.670104   72884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:40:59.670128   72884 kubeadm.go:157] found existing configuration files:
	
	I0723 15:40:59.670179   72884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:40:59.679186   72884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:40:59.679250   72884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:40:59.689105   72884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:40:59.698203   72884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:40:59.698258   72884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:40:59.708100   72884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:40:59.716754   72884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:40:59.716810   72884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:40:59.725972   72884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:40:59.735218   72884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:40:59.735276   72884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:40:59.744076   72884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:40:59.752885   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:40:59.873055   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:41:00.392063   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:41:00.602250   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:41:00.678028   72884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:41:00.775377   72884 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:41:00.775495   72884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:41:01.275992   72884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:41:01.776104   72884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:41:02.276445   72884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	
	==> CRI-O <==
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.631545118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f8b4f55-2839-42cf-bc63-0ac1d70f0282 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.638960772Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c64a3be1-a2ae-47e1-984f-601ecea732f4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.639281801Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&PodSandboxMetadata{Name:busybox,Uid:806aa06c-55ed-4855-a400-2cf44deea87b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748111769092356,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.551718169Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-v2bhl,Uid:795d8c55-65e3-46c6-9b06-71f89ff17310,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17217481114478634
41,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.551714666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:934a8f5b492a2e509b606bf46f066948a8357e0ed9505ebed8f46e0af55eab90,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-dsfmg,Uid:98637dfb-5600-4b7d-9272-ac5c5172d67b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748109648124128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-dsfmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98637dfb-5600-4b7d-9272-ac5c5172d67b,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.5
51712402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748103891129883,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T15:21:43.551713599Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&PodSandboxMetadata{Name:kube-proxy-wzbps,Uid:daefb252-a4db-4952-88fe-1e8e082a7625,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748103864079819,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a7625,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-07-23T15:21:43.551705195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-543029,Uid:75d6ebd1070a86365328da7acb5078db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099073893330,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.227:2379,kubernetes.io/config.hash: 75d6ebd1070a86365328da7acb5078db,kubernetes.io/config.seen: 2024-07-23T15:21:38.599671223Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-543029,
Uid:197b776f1fd2dda260ca13c047c74311,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099063638216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.227:8443,kubernetes.io/config.hash: 197b776f1fd2dda260ca13c047c74311,kubernetes.io/config.seen: 2024-07-23T15:21:38.558009677Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-543029,Uid:ccf688a759b9926ac7c4b3d6ad9c3dfe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099062567496,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ccf688a759b9926ac7c4b3d6ad9c3dfe,kubernetes.io/config.seen: 2024-07-23T15:21:38.558015276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-543029,Uid:00361442c0dcd67948776b99792e6298,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099056695934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 00361442c0dcd67948776b99792e6298,ku
bernetes.io/config.seen: 2024-07-23T15:21:38.558014031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c64a3be1-a2ae-47e1-984f-601ecea732f4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.639997868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caf7fcb9-421f-4dcc-8f70-7aacff15cf2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.640052123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caf7fcb9-421f-4dcc-8f70-7aacff15cf2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.640864069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88
fe-1e8e082a7625,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]st
ring{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubern
etes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=caf7fcb9-421f-4dcc-8f70-7aacff15cf2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.671767952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fae647c-60e8-46d2-a28d-ecfeac8e234d name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.671848933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fae647c-60e8-46d2-a28d-ecfeac8e234d name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.673314054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82db4330-850f-4083-bccf-93d1f0c39a3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.673700564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749266673674940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82db4330-850f-4083-bccf-93d1f0c39a3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.675009367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=133e1b14-23db-4d02-ba22-43e28395e046 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.675122056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=133e1b14-23db-4d02-ba22-43e28395e046 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.675415332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=133e1b14-23db-4d02-ba22-43e28395e046 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.713511241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=916494a1-ee3e-4de6-b8d3-4d3bcdb4152c name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.713634956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=916494a1-ee3e-4de6-b8d3-4d3bcdb4152c name=/runtime.v1.RuntimeService/Version
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.714873926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c40f3231-696f-43ad-9a7e-0839db1fe902 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.715400769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749266715376118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c40f3231-696f-43ad-9a7e-0839db1fe902 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.715989280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67370172-7b34-40dc-89cb-2245ea92beae name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.716056316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67370172-7b34-40dc-89cb-2245ea92beae name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.716299639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67370172-7b34-40dc-89cb-2245ea92beae name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.734122538Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=acda16eb-e195-4364-9b36-d719b1b244b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.734725508Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&PodSandboxMetadata{Name:busybox,Uid:806aa06c-55ed-4855-a400-2cf44deea87b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748111769092356,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.551718169Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-v2bhl,Uid:795d8c55-65e3-46c6-9b06-71f89ff17310,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17217481114478634
41,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.551714666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:934a8f5b492a2e509b606bf46f066948a8357e0ed9505ebed8f46e0af55eab90,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-dsfmg,Uid:98637dfb-5600-4b7d-9272-ac5c5172d67b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748109648124128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-dsfmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98637dfb-5600-4b7d-9272-ac5c5172d67b,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-23T15:21:43.5
51712402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748103891129883,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-23T15:21:43.551713599Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&PodSandboxMetadata{Name:kube-proxy-wzbps,Uid:daefb252-a4db-4952-88fe-1e8e082a7625,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748103864079819,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a7625,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-07-23T15:21:43.551705195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-543029,Uid:75d6ebd1070a86365328da7acb5078db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099073893330,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.227:2379,kubernetes.io/config.hash: 75d6ebd1070a86365328da7acb5078db,kubernetes.io/config.seen: 2024-07-23T15:21:38.599671223Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-543029,
Uid:197b776f1fd2dda260ca13c047c74311,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099063638216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.227:8443,kubernetes.io/config.hash: 197b776f1fd2dda260ca13c047c74311,kubernetes.io/config.seen: 2024-07-23T15:21:38.558009677Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-543029,Uid:ccf688a759b9926ac7c4b3d6ad9c3dfe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099062567496,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ccf688a759b9926ac7c4b3d6ad9c3dfe,kubernetes.io/config.seen: 2024-07-23T15:21:38.558015276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-543029,Uid:00361442c0dcd67948776b99792e6298,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721748099056695934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 00361442c0dcd67948776b99792e6298,ku
bernetes.io/config.seen: 2024-07-23T15:21:38.558014031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=acda16eb-e195-4364-9b36-d719b1b244b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.735414666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60f473d2-7071-4162-a42c-91d0c8954ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.735499704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60f473d2-7071-4162-a42c-91d0c8954ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:41:06 no-preload-543029 crio[721]: time="2024-07-23 15:41:06.735705652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721748134831359502,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e61738b0d43a90aaab00a125eca846b8c213d6fb7a698cdd2cae4a94d5f84d58,PodSandboxId:d73766a0dfc70498662f66a0c4c477eaf0221bbffdd3c8edc7e04ce4cc3ff507,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721748114694636923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 806aa06c-55ed-4855-a400-2cf44deea87b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca,PodSandboxId:b1b956731128b4013e5349cb65292fedf8746cb38f6fb1d58f013ead872b5dba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721748111629103733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-v2bhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795d8c55-65e3-46c6-9b06-71f89ff17310,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6,PodSandboxId:37c60368c52e6b3d1a2c480f12ace0e33a152d0d7b31358c8ce9d253c995791c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721748104094954912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
6cee44d-4674-4d8b-8d1b-d6a8578d5bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca,PodSandboxId:e0f26f676520346b3437e85ecebed0dd6fa9004d7b0167d58d315963e2c0e460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721748104033614983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzbps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daefb252-a4db-4952-88fe-1e8e082a76
25,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0,PodSandboxId:08b4f071b699d4e1ab260e125294c13468a13807ff3750f14bcae25132391bb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721748099331421074,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d6ebd1070a86365328da7acb5078db,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e,PodSandboxId:f882803b840a6adfea21e80de02b1285cb4dc595058004e8c9ec0720ae25c545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721748099306381576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197b776f1fd2dda260ca13c047c74311,},Annotations:map[string]string{io.kubernetes.containe
r.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d,PodSandboxId:8e7bc39b96f0ebb759ef6ace85f5fff49052b9dc2a7a8325f56cd26a41e248ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721748099271405876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00361442c0dcd67948776b99792e6298,},Annotations:map[string]string{io.kuber
netes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14,PodSandboxId:c78446156bfe86bf2c898cced7f8fbdca09210e634ee3b67d15511bf04264904,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721748099243478152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-543029,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf688a759b9926ac7c4b3d6ad9c3dfe,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60f473d2-7071-4162-a42c-91d0c8954ec5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33bc08508dd46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   37c60368c52e6       storage-provisioner
	e61738b0d43a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   d73766a0dfc70       busybox
	289a796ff2c74       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   b1b956731128b       coredns-5cfdc65f69-v2bhl
	2d2d4409a7d9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   37c60368c52e6       storage-provisioner
	62a5ee505542b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      19 minutes ago      Running             kube-proxy                1                   e0f26f6765203       kube-proxy-wzbps
	e23570772b1ba       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      19 minutes ago      Running             etcd                      1                   08b4f071b699d       etcd-no-preload-543029
	64d77a0d9b5ed       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      19 minutes ago      Running             kube-apiserver            1                   f882803b840a6       kube-apiserver-no-preload-543029
	7006aba67d59f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      19 minutes ago      Running             kube-controller-manager   1                   8e7bc39b96f0e       kube-controller-manager-no-preload-543029
	bdf775206fb2d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      19 minutes ago      Running             kube-scheduler            1                   c78446156bfe8       kube-scheduler-no-preload-543029
	
	
	==> coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32842 - 9729 "HINFO IN 1856836756006291531.7268083712499520585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021472439s
	
	
	==> describe nodes <==
	Name:               no-preload-543029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-543029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=no-preload-543029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_12_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-543029
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:41:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:37:30 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:37:30 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:37:30 +0000   Tue, 23 Jul 2024 15:12:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:37:30 +0000   Tue, 23 Jul 2024 15:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.227
	  Hostname:    no-preload-543029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb6b4649da84ee099e27e146836b0c7
	  System UUID:                9eb6b464-9da8-4ee0-99e2-7e146836b0c7
	  Boot ID:                    dc32264d-9a14-4f6d-bd66-36c40076c1e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5cfdc65f69-v2bhl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-543029                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-543029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-543029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-wzbps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-543029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-dsfmg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-543029 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-543029 event: Registered Node no-preload-543029 in Controller
	  Normal  CIDRAssignmentFailed     28m                cidrAllocator    Node no-preload-543029 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-543029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientMemory
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-543029 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-543029 event: Registered Node no-preload-543029 in Controller
	
	
	==> dmesg <==
	[Jul23 15:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051490] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039765] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942519] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.942725] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.604953] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.029763] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.065548] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059306] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.177026] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.113880] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.287990] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[ +14.664725] systemd-fstab-generator[1164]: Ignoring "noauto" option for root device
	[  +0.063194] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.795946] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +5.035998] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.506659] systemd-fstab-generator[1917]: Ignoring "noauto" option for root device
	[  +3.773412] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.070787] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] <==
	{"level":"info","ts":"2024-07-23T15:31:41.560356Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2585086080,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2024-07-23T15:36:41.556638Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1103}
	{"level":"info","ts":"2024-07-23T15:36:41.560357Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1103,"took":"3.444648ms","hash":285054551,"current-db-size-bytes":2830336,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1679360,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-23T15:36:41.560399Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":285054551,"revision":1103,"compact-revision":860}
	{"level":"warn","ts":"2024-07-23T15:39:59.522861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.196658ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17034461933112330508 > lease_revoke:<id:6c6690e02f1308af>","response":"size:27"}
	{"level":"info","ts":"2024-07-23T15:39:59.739134Z","caller":"traceutil/trace.go:171","msg":"trace[65997061] linearizableReadLoop","detail":"{readStateIndex:1766; appliedIndex:1765; }","duration":"106.921033ms","start":"2024-07-23T15:39:59.632174Z","end":"2024-07-23T15:39:59.739095Z","steps":["trace[65997061] 'read index received'  (duration: 106.790989ms)","trace[65997061] 'applied index is now lower than readState.Index'  (duration: 129.582µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:39:59.739352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.158126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:39:59.739438Z","caller":"traceutil/trace.go:171","msg":"trace[735454074] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1508; }","duration":"107.265227ms","start":"2024-07-23T15:39:59.632154Z","end":"2024-07-23T15:39:59.739419Z","steps":["trace[735454074] 'agreement among raft nodes before linearized reading'  (duration: 107.143189ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:39:59.739359Z","caller":"traceutil/trace.go:171","msg":"trace[43996539] transaction","detail":"{read_only:false; response_revision:1508; number_of_response:1; }","duration":"185.060511ms","start":"2024-07-23T15:39:59.554282Z","end":"2024-07-23T15:39:59.739342Z","steps":["trace[43996539] 'process raft request'  (duration: 184.712553ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:40:13.653599Z","caller":"traceutil/trace.go:171","msg":"trace[797057101] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"659.256702ms","start":"2024-07-23T15:40:12.994328Z","end":"2024-07-23T15:40:13.653585Z","steps":["trace[797057101] 'process raft request'  (duration: 658.787175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:40:13.65432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T15:40:12.994309Z","time spent":"659.425818ms","remote":"127.0.0.1:43820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-6kuuw6s2m6bejfkf4oreyh44ku\" mod_revision:1510 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-6kuuw6s2m6bejfkf4oreyh44ku\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-6kuuw6s2m6bejfkf4oreyh44ku\" > >"}
	{"level":"info","ts":"2024-07-23T15:40:13.653318Z","caller":"traceutil/trace.go:171","msg":"trace[708546573] linearizableReadLoop","detail":"{readStateIndex:1778; appliedIndex:1777; }","duration":"137.864852ms","start":"2024-07-23T15:40:13.51544Z","end":"2024-07-23T15:40:13.653305Z","steps":["trace[708546573] 'read index received'  (duration: 137.587009ms)","trace[708546573] 'applied index is now lower than readState.Index'  (duration: 277.023µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:40:13.654731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.285878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:40:13.654768Z","caller":"traceutil/trace.go:171","msg":"trace[506926392] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1518; }","duration":"139.326586ms","start":"2024-07-23T15:40:13.515431Z","end":"2024-07-23T15:40:13.654758Z","steps":["trace[506926392] 'agreement among raft nodes before linearized reading'  (duration: 139.264594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:40:13.654882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.081942ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:40:13.654923Z","caller":"traceutil/trace.go:171","msg":"trace[1576544086] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1518; }","duration":"113.128635ms","start":"2024-07-23T15:40:13.541788Z","end":"2024-07-23T15:40:13.654917Z","steps":["trace[1576544086] 'agreement among raft nodes before linearized reading'  (duration: 113.061834ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:40:14.004669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.870707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-23T15:40:14.004753Z","caller":"traceutil/trace.go:171","msg":"trace[1293236147] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1518; }","duration":"201.965718ms","start":"2024-07-23T15:40:13.802771Z","end":"2024-07-23T15:40:14.004736Z","steps":["trace[1293236147] 'range keys from in-memory index tree'  (duration: 201.662431ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T15:40:14.134143Z","caller":"traceutil/trace.go:171","msg":"trace[1980121690] transaction","detail":"{read_only:false; response_revision:1519; number_of_response:1; }","duration":"125.026745ms","start":"2024-07-23T15:40:14.009071Z","end":"2024-07-23T15:40:14.134098Z","steps":["trace[1980121690] 'process raft request'  (duration: 124.598998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:40:14.38769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.705375ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17034461933112330600 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c6690e02f130967>","response":"size:39"}
	{"level":"info","ts":"2024-07-23T15:40:14.549504Z","caller":"traceutil/trace.go:171","msg":"trace[1263248729] transaction","detail":"{read_only:false; response_revision:1520; number_of_response:1; }","duration":"161.026931ms","start":"2024-07-23T15:40:14.388457Z","end":"2024-07-23T15:40:14.549484Z","steps":["trace[1263248729] 'process raft request'  (duration: 114.155797ms)","trace[1263248729] 'compare'  (duration: 46.740373ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-23T15:41:00.621745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.547525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:41:00.622094Z","caller":"traceutil/trace.go:171","msg":"trace[704147864] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1556; }","duration":"104.892343ms","start":"2024-07-23T15:41:00.517162Z","end":"2024-07-23T15:41:00.622055Z","steps":["trace[704147864] 'range keys from in-memory index tree'  (duration: 104.351678ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T15:41:00.622573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.903496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T15:41:00.622632Z","caller":"traceutil/trace.go:171","msg":"trace[1439857728] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:1556; }","duration":"161.980196ms","start":"2024-07-23T15:41:00.460638Z","end":"2024-07-23T15:41:00.622619Z","steps":["trace[1439857728] 'count revisions from in-memory index tree'  (duration: 161.842611ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:41:07 up 20 min,  0 users,  load average: 0.21, 0.16, 0.10
	Linux no-preload-543029 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] <==
	W0723 15:36:43.822855       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:36:43.822926       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:36:43.823911       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:36:43.823975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:37:43.824649       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:37:43.824765       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0723 15:37:43.824652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:37:43.824830       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:37:43.826021       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:37:43.826045       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0723 15:39:43.827231       1 handler_proxy.go:99] no RequestInfo found in the context
	W0723 15:39:43.827310       1 handler_proxy.go:99] no RequestInfo found in the context
	E0723 15:39:43.827335       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0723 15:39:43.827433       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0723 15:39:43.828485       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0723 15:39:43.828689       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] <==
	E0723 15:35:47.606943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:35:47.633627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:36:17.612709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:36:17.641647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:36:47.619291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:36:47.649003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:37:17.626806       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:37:17.656327       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:37:30.236494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-543029"
	I0723 15:37:43.656239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="347.462µs"
	E0723 15:37:47.633017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:37:47.665015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0723 15:37:54.656389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="64.052µs"
	E0723 15:38:17.640923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:38:17.673171       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:38:47.647467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:38:47.681742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:17.653422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:39:17.689467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:39:47.660840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:39:47.699817       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:17.669674       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:40:17.708908       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0723 15:40:47.675846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0723 15:40:47.717988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0723 15:21:44.314729       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0723 15:21:44.329783       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.227"]
	E0723 15:21:44.330006       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0723 15:21:44.411670       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0723 15:21:44.411757       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0723 15:21:44.411811       1 server_linux.go:170] "Using iptables Proxier"
	I0723 15:21:44.414903       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0723 15:21:44.415303       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0723 15:21:44.415337       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:44.418406       1 config.go:197] "Starting service config controller"
	I0723 15:21:44.418485       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:21:44.418601       1 config.go:104] "Starting endpoint slice config controller"
	I0723 15:21:44.418665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:21:44.421063       1 config.go:326] "Starting node config controller"
	I0723 15:21:44.421129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:21:44.519316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:21:44.519853       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:21:44.521246       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] <==
	I0723 15:21:40.560567       1 serving.go:386] Generated self-signed cert in-memory
	I0723 15:21:42.832716       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0723 15:21:42.832760       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:21:42.839433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:21:42.839761       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0723 15:21:42.839903       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0723 15:21:42.840263       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0723 15:21:42.841519       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:21:42.841548       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:42.841908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0723 15:21:42.841947       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0723 15:21:42.940512       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0723 15:21:42.941895       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:21:42.942258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 23 15:38:38 no-preload-543029 kubelet[1293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:38:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:38:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:38:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:38:48 no-preload-543029 kubelet[1293]: E0723 15:38:48.641170    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:39:02 no-preload-543029 kubelet[1293]: E0723 15:39:02.642139    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:39:13 no-preload-543029 kubelet[1293]: E0723 15:39:13.640046    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:39:27 no-preload-543029 kubelet[1293]: E0723 15:39:27.639398    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:39:38 no-preload-543029 kubelet[1293]: E0723 15:39:38.654371    1293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:39:38 no-preload-543029 kubelet[1293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:39:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:39:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:39:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:39:42 no-preload-543029 kubelet[1293]: E0723 15:39:42.642981    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:39:54 no-preload-543029 kubelet[1293]: E0723 15:39:54.640319    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:40:07 no-preload-543029 kubelet[1293]: E0723 15:40:07.640505    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:40:19 no-preload-543029 kubelet[1293]: E0723 15:40:19.640321    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:40:33 no-preload-543029 kubelet[1293]: E0723 15:40:33.646471    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:40:38 no-preload-543029 kubelet[1293]: E0723 15:40:38.652948    1293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 23 15:40:38 no-preload-543029 kubelet[1293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 23 15:40:38 no-preload-543029 kubelet[1293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 23 15:40:38 no-preload-543029 kubelet[1293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 23 15:40:38 no-preload-543029 kubelet[1293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 23 15:40:46 no-preload-543029 kubelet[1293]: E0723 15:40:46.641117    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	Jul 23 15:41:00 no-preload-543029 kubelet[1293]: E0723 15:41:00.639346    1293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-dsfmg" podUID="98637dfb-5600-4b7d-9272-ac5c5172d67b"
	
	
	==> storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] <==
	I0723 15:21:44.274794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0723 15:22:14.278531       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] <==
	I0723 15:22:14.915644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 15:22:14.924832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 15:22:14.924908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 15:22:32.330977       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 15:22:32.331276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841!
	I0723 15:22:32.332810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e25f0429-873a-43a8-b4e4-8a434517782e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841 became leader
	I0723 15:22:32.432098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-543029_4a75ce72-4451-43bb-bb47-de07b27b1841!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-543029 -n no-preload-543029
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-543029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-dsfmg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg: exit status 1 (70.293313ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-dsfmg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-543029 describe pod metrics-server-78fcd8795b-dsfmg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (355.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (103.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.51:8443: connect: connection refused
    [93 further identical "connection refused" pod-list warnings from helpers_test.go:329 elided; the apiserver at 192.168.50.51:8443 stayed unreachable until the 9m0s wait expired]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (221.95338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-000272" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-000272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-000272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.263µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-000272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
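The polling loop above uses the same label selector that appears in the refused API calls (k8s-app=kubernetes-dashboard). As a rough guide for reproducing the check by hand, the commands below mirror what the test does; this is a minimal sketch that assumes the old-k8s-version-000272 profile still exists and that its apiserver eventually comes back up:

    # Re-check whether the profile's apiserver is serving (it reported "Stopped" above).
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272
    # The wait that timed out: list dashboard pods by the selector the helper polls.
    kubectl --context old-k8s-version-000272 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # The follow-up the test attempted after the wait failed.
    kubectl --context old-k8s-version-000272 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper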
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (214.551468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
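The status check reports the host VM as Running while the apiserver is Stopped, so the failure is inside the guest rather than at the hypervisor level. A hedged sketch of how one might inspect the control-plane containers directly through CRI-O, assuming SSH into the node still works (the libmachine "no route to host" dials later in this log suggest it may not for every profile):

    # List all CRI-O containers, including exited ones, to see whether kube-apiserver crashed or never started.
    out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo crictl ps -a"
    # Check recent kubelet activity for restart loops or image-pull errors.
    out/minikube-linux-amd64 -p old-k8s-version-000272 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"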
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-000272 logs -n 25: (1.57016579s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-193974                              | stopped-upgrade-193974       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-543029             | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-543029                                   | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-486436            | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-000272        | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-503350                           | kubernetes-upgrade-503350    | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	| delete  | -p                                                     | disable-driver-mounts-518198 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | disable-driver-mounts-518198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:15 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-543029                  | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-543029 --memory=2200                     | no-preload-543029            | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-486436                 | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-486436                                  | embed-certs-486436           | jenkins | v1.33.1 | 23 Jul 24 15:15 UTC | 23 Jul 24 15:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-911217  | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-000272             | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC | 23 Jul 24 15:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-000272                              | old-k8s-version-000272       | jenkins | v1.33.1 | 23 Jul 24 15:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-911217       | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-911217 | jenkins | v1.33.1 | 23 Jul 24 15:18 UTC | 23 Jul 24 15:25 UTC |
	|         | default-k8s-diff-port-911217                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:18:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:18:41.988416   66641 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:18:41.988512   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988520   66641 out.go:304] Setting ErrFile to fd 2...
	I0723 15:18:41.988525   66641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:18:41.988683   66641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:18:41.989181   66641 out.go:298] Setting JSON to false
	I0723 15:18:41.990049   66641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7268,"bootTime":1721740654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:18:41.990101   66641 start.go:139] virtualization: kvm guest
	I0723 15:18:41.992106   66641 out.go:177] * [default-k8s-diff-port-911217] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:18:41.993366   66641 notify.go:220] Checking for updates...
	I0723 15:18:41.993387   66641 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:18:41.994650   66641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:18:41.995849   66641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:18:41.997045   66641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:18:41.998236   66641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:18:41.999412   66641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:18:42.001155   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:18:42.001533   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.001596   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.016186   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0723 15:18:42.016616   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.017209   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.017230   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.017528   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.017699   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.017927   66641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:18:42.018205   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:18:42.018235   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:18:42.032467   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0723 15:18:42.032800   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:18:42.033214   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:18:42.033236   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:18:42.033544   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:18:42.033718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:18:42.065773   66641 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 15:18:42.067127   66641 start.go:297] selected driver: kvm2
	I0723 15:18:42.067142   66641 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.067236   66641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:18:42.067871   66641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.067939   66641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 15:18:42.083220   66641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 15:18:42.083563   66641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:18:42.083627   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:18:42.083641   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:18:42.083677   66641 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:18:42.083772   66641 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:18:42.085608   66641 out.go:177] * Starting "default-k8s-diff-port-911217" primary control-plane node in "default-k8s-diff-port-911217" cluster
	I0723 15:18:42.394642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:42.086917   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:18:42.086954   66641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 15:18:42.086961   66641 cache.go:56] Caching tarball of preloaded images
	I0723 15:18:42.087024   66641 preload.go:172] Found /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0723 15:18:42.087034   66641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:18:42.087125   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:18:42.087294   66641 start.go:360] acquireMachinesLock for default-k8s-diff-port-911217: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:18:45.466731   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:51.546673   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:18:54.618775   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:00.698667   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:03.770734   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:09.850627   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:12.922681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:19.002679   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:22.074678   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:28.154680   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:31.226704   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:37.306625   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:40.378652   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:46.458657   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:49.530693   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:55.610642   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:19:58.682681   64842 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.227:22: connect: no route to host
	I0723 15:20:01.686613   65177 start.go:364] duration metric: took 4m13.413067096s to acquireMachinesLock for "embed-certs-486436"
	I0723 15:20:01.686692   65177 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:01.686702   65177 fix.go:54] fixHost starting: 
	I0723 15:20:01.687041   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:01.687070   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:01.702700   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0723 15:20:01.703107   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:01.703623   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:20:01.703649   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:01.704019   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:01.704222   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:01.704417   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:20:01.706547   65177 fix.go:112] recreateIfNeeded on embed-certs-486436: state=Stopped err=<nil>
	I0723 15:20:01.706583   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	W0723 15:20:01.706810   65177 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:01.708411   65177 out.go:177] * Restarting existing kvm2 VM for "embed-certs-486436" ...
	I0723 15:20:01.709393   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Start
	I0723 15:20:01.709559   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring networks are active...
	I0723 15:20:01.710353   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network default is active
	I0723 15:20:01.710733   65177 main.go:141] libmachine: (embed-certs-486436) Ensuring network mk-embed-certs-486436 is active
	I0723 15:20:01.711060   65177 main.go:141] libmachine: (embed-certs-486436) Getting domain xml...
	I0723 15:20:01.711832   65177 main.go:141] libmachine: (embed-certs-486436) Creating domain...
	I0723 15:20:02.915930   65177 main.go:141] libmachine: (embed-certs-486436) Waiting to get IP...
	I0723 15:20:02.916770   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:02.917115   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:02.917188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:02.917097   66959 retry.go:31] will retry after 245.483954ms: waiting for machine to come up
	I0723 15:20:01.683920   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:01.683992   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684333   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:20:01.684360   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:20:01.684537   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:20:01.686489   64842 machine.go:97] duration metric: took 4m34.539799868s to provisionDockerMachine
	I0723 15:20:01.686530   64842 fix.go:56] duration metric: took 4m34.563243323s for fixHost
	I0723 15:20:01.686547   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 4m34.563294357s
	W0723 15:20:01.686572   64842 start.go:714] error starting host: provision: host is not running
	W0723 15:20:01.686657   64842 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0723 15:20:01.686668   64842 start.go:729] Will try again in 5 seconds ...
	I0723 15:20:03.164587   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.165021   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.165067   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.164972   66959 retry.go:31] will retry after 387.950176ms: waiting for machine to come up
	I0723 15:20:03.554705   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.555161   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.555188   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.555103   66959 retry.go:31] will retry after 404.807138ms: waiting for machine to come up
	I0723 15:20:03.961830   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:03.962290   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:03.962323   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:03.962236   66959 retry.go:31] will retry after 570.61318ms: waiting for machine to come up
	I0723 15:20:04.534152   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:04.534702   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:04.534731   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:04.534650   66959 retry.go:31] will retry after 542.857217ms: waiting for machine to come up
	I0723 15:20:05.079445   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.079866   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.079894   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.079811   66959 retry.go:31] will retry after 653.88428ms: waiting for machine to come up
	I0723 15:20:05.735919   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:05.736350   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:05.736381   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:05.736331   66959 retry.go:31] will retry after 871.798617ms: waiting for machine to come up
	I0723 15:20:06.609428   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:06.609885   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:06.609908   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:06.609854   66959 retry.go:31] will retry after 1.079464189s: waiting for machine to come up
	I0723 15:20:07.690706   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:07.691096   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:07.691122   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:07.691070   66959 retry.go:31] will retry after 1.414145571s: waiting for machine to come up
	I0723 15:20:06.687299   64842 start.go:360] acquireMachinesLock for no-preload-543029: {Name:mkc8ffa8d1a7cde4fb65dc3bdbc209df14d9c326 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0723 15:20:09.107698   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:09.108062   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:09.108091   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:09.108012   66959 retry.go:31] will retry after 2.263313118s: waiting for machine to come up
	I0723 15:20:11.374573   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:11.375009   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:11.375035   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:11.374970   66959 retry.go:31] will retry after 2.600297505s: waiting for machine to come up
	I0723 15:20:13.978265   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:13.978707   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:13.978733   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:13.978653   66959 retry.go:31] will retry after 2.515380756s: waiting for machine to come up
	I0723 15:20:16.497458   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:16.497913   65177 main.go:141] libmachine: (embed-certs-486436) DBG | unable to find current IP address of domain embed-certs-486436 in network mk-embed-certs-486436
	I0723 15:20:16.497945   65177 main.go:141] libmachine: (embed-certs-486436) DBG | I0723 15:20:16.497872   66959 retry.go:31] will retry after 3.863044954s: waiting for machine to come up
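The string of "will retry after ..." messages above is minikube's standard wait loop: after restarting the domain it polls libvirt for a DHCP lease, sleeping a randomized, steadily growing interval between attempts until the VM reports an IP or the overall timeout expires. A minimal Go sketch of that pattern follows; the function and variable names are illustrative, not minikube's actual retry.go API.

// A minimal sketch (not minikube's retry.go) of retry with growing backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes, sleeping a
// randomized, growing interval between attempts and logging each wait.
func waitFor(check func() error, deadline time.Duration) error {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if err := check(); err == nil {
			return nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow roughly geometrically between attempts
	}
	return errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 4 { // pretend the DHCP lease shows up on the 4th poll
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 2*time.Minute)
}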
	I0723 15:20:21.587107   65605 start.go:364] duration metric: took 3m54.633068774s to acquireMachinesLock for "old-k8s-version-000272"
	I0723 15:20:21.587168   65605 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:21.587179   65605 fix.go:54] fixHost starting: 
	I0723 15:20:21.587596   65605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:21.587632   65605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:21.608083   65605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0723 15:20:21.608563   65605 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:21.609109   65605 main.go:141] libmachine: Using API Version  1
	I0723 15:20:21.609148   65605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:21.609463   65605 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:21.609679   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:21.609839   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetState
	I0723 15:20:21.611555   65605 fix.go:112] recreateIfNeeded on old-k8s-version-000272: state=Stopped err=<nil>
	I0723 15:20:21.611590   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	W0723 15:20:21.611766   65605 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:21.614168   65605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-000272" ...
	I0723 15:20:21.615607   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .Start
	I0723 15:20:21.615831   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring networks are active...
	I0723 15:20:21.616640   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network default is active
	I0723 15:20:21.617122   65605 main.go:141] libmachine: (old-k8s-version-000272) Ensuring network mk-old-k8s-version-000272 is active
	I0723 15:20:21.617591   65605 main.go:141] libmachine: (old-k8s-version-000272) Getting domain xml...
	I0723 15:20:21.618346   65605 main.go:141] libmachine: (old-k8s-version-000272) Creating domain...
	I0723 15:20:20.365141   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365653   65177 main.go:141] libmachine: (embed-certs-486436) Found IP for machine: 192.168.39.200
	I0723 15:20:20.365671   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.365677   65177 main.go:141] libmachine: (embed-certs-486436) Reserving static IP address...
	I0723 15:20:20.366319   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.366340   65177 main.go:141] libmachine: (embed-certs-486436) DBG | skip adding static IP to network mk-embed-certs-486436 - found existing host DHCP lease matching {name: "embed-certs-486436", mac: "52:54:00:2e:49:db", ip: "192.168.39.200"}
	I0723 15:20:20.366351   65177 main.go:141] libmachine: (embed-certs-486436) Reserved static IP address: 192.168.39.200
	I0723 15:20:20.366360   65177 main.go:141] libmachine: (embed-certs-486436) Waiting for SSH to be available...
	I0723 15:20:20.366367   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Getting to WaitForSSH function...
	I0723 15:20:20.368870   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369217   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.369239   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.369431   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH client type: external
	I0723 15:20:20.369462   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa (-rw-------)
	I0723 15:20:20.369485   65177 main.go:141] libmachine: (embed-certs-486436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:20.369495   65177 main.go:141] libmachine: (embed-certs-486436) DBG | About to run SSH command:
	I0723 15:20:20.369505   65177 main.go:141] libmachine: (embed-certs-486436) DBG | exit 0
	I0723 15:20:20.494158   65177 main.go:141] libmachine: (embed-certs-486436) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:20.494591   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetConfigRaw
	I0723 15:20:20.495255   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.497821   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498094   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.498124   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.498346   65177 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/config.json ...
	I0723 15:20:20.498558   65177 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:20.498577   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:20.498808   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.500819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501138   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.501166   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.501276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.501481   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501643   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.501770   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.501926   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.502215   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.502231   65177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:20.606234   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:20.606264   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606556   65177 buildroot.go:166] provisioning hostname "embed-certs-486436"
	I0723 15:20:20.606598   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.606793   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.609446   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609801   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.609838   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.609990   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.610137   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610276   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.610468   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.610650   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.610813   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.610825   65177 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-486436 && echo "embed-certs-486436" | sudo tee /etc/hostname
	I0723 15:20:20.727215   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-486436
	
	I0723 15:20:20.727239   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.730058   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730363   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.730411   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.730552   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.730741   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.730911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.731048   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.731204   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:20.731364   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:20.731380   65177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-486436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-486436/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-486436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:20.844079   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:20.844109   65177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:20.844128   65177 buildroot.go:174] setting up certificates
	I0723 15:20:20.844135   65177 provision.go:84] configureAuth start
	I0723 15:20:20.844145   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetMachineName
	I0723 15:20:20.844400   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:20.846867   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847192   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.847220   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.847342   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.849457   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849786   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.849829   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.849937   65177 provision.go:143] copyHostCerts
	I0723 15:20:20.849992   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:20.850002   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:20.850068   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:20.850164   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:20.850172   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:20.850201   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:20.850263   65177 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:20.850272   65177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:20.850293   65177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:20.850358   65177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.embed-certs-486436 san=[127.0.0.1 192.168.39.200 embed-certs-486436 localhost minikube]
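provision.go:117 above issues a fresh server certificate whose SAN list is taken straight from the log line (127.0.0.1, 192.168.39.200, embed-certs-486436, localhost, minikube) and which is signed with the minikube CA key. Below is a self-contained sketch of producing such a certificate with Go's crypto/x509; it self-signs for brevity where minikube signs with ca-key.pem, so treat it as an illustration rather than the provisioner's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; provision.go signs with the minikube CA key instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-486436"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
		DNSNames:    []string{"embed-certs-486436", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}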
	I0723 15:20:20.945454   65177 provision.go:177] copyRemoteCerts
	I0723 15:20:20.945511   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:20.945536   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:20.948316   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948605   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:20.948639   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:20.948797   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:20.948981   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:20.949142   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:20.949267   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.032367   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:20:21.054529   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:21.076049   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0723 15:20:21.098274   65177 provision.go:87] duration metric: took 254.126202ms to configureAuth
	I0723 15:20:21.098303   65177 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:21.098510   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:20:21.098600   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.100971   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101307   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.101341   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.101520   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.101687   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.101828   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.102031   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.102187   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.102375   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.102418   65177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:21.359179   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:21.359214   65177 machine.go:97] duration metric: took 860.640697ms to provisionDockerMachine
	I0723 15:20:21.359230   65177 start.go:293] postStartSetup for "embed-certs-486436" (driver="kvm2")
	I0723 15:20:21.359244   65177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:21.359265   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.359777   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:21.359804   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.362611   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.362936   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.362963   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.363138   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.363311   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.363497   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.363669   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.444572   65177 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:21.448633   65177 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:21.448662   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:21.448733   65177 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:21.448817   65177 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:21.448925   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:21.457699   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:21.480387   65177 start.go:296] duration metric: took 121.140622ms for postStartSetup
	I0723 15:20:21.480431   65177 fix.go:56] duration metric: took 19.793728867s for fixHost
	I0723 15:20:21.480449   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.483324   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483667   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.483690   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.483854   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.484057   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484211   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.484353   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.484516   65177 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:21.484692   65177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0723 15:20:21.484703   65177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:21.586960   65177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748021.563549452
	
	I0723 15:20:21.586982   65177 fix.go:216] guest clock: 1721748021.563549452
	I0723 15:20:21.586989   65177 fix.go:229] Guest: 2024-07-23 15:20:21.563549452 +0000 UTC Remote: 2024-07-23 15:20:21.480435025 +0000 UTC m=+273.351160165 (delta=83.114427ms)
	I0723 15:20:21.587010   65177 fix.go:200] guest clock delta is within tolerance: 83.114427ms
	I0723 15:20:21.587016   65177 start.go:83] releasing machines lock for "embed-certs-486436", held for 19.900344761s
	I0723 15:20:21.587045   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.587363   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:21.590600   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.590998   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.591041   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.591194   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591723   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591911   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:20:21.591965   65177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:21.592024   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.592172   65177 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:21.592190   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:20:21.594877   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595266   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595337   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595387   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595502   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595698   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.595751   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:21.595776   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:21.595837   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.595909   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:20:21.595998   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.596083   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:20:21.596218   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:20:21.596369   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:20:21.709871   65177 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:21.717210   65177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:21.866461   65177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:21.871904   65177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:21.871979   65177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:21.888197   65177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:21.888226   65177 start.go:495] detecting cgroup driver to use...
	I0723 15:20:21.888339   65177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:21.903857   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:21.917841   65177 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:21.917917   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:21.935814   65177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:21.949898   65177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:22.066137   65177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:22.208517   65177 docker.go:233] disabling docker service ...
	I0723 15:20:22.208606   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:22.222583   65177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:22.235322   65177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:22.380324   65177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:22.513404   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:22.529676   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:22.546980   65177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:20:22.547050   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.556656   65177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:22.556723   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.566410   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.576269   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.586125   65177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:22.597824   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.608136   65177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:22.628391   65177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
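Taken together, the sed edits above leave the CRI-O drop-in pinning the pause image, switching the cgroup manager, routing conmon into the pod cgroup, and opening unprivileged ports. Roughly, /etc/crio/crio.conf.d/02-crio.conf ends up containing something like the following; the keys come directly from the commands above, while the TOML section headers are assumed from a stock CRI-O config.

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]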
	I0723 15:20:22.642862   65177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:22.652564   65177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:22.652625   65177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:22.667485   65177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:22.677669   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:22.809762   65177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:22.947870   65177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:22.947955   65177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:22.952570   65177 start.go:563] Will wait 60s for crictl version
	I0723 15:20:22.952672   65177 ssh_runner.go:195] Run: which crictl
	I0723 15:20:22.956658   65177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:22.997591   65177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:22.997719   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.030830   65177 ssh_runner.go:195] Run: crio --version
	I0723 15:20:23.060406   65177 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0723 15:20:23.061617   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetIP
	I0723 15:20:23.065154   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065547   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:20:23.065572   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:20:23.065845   65177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:23.070019   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:23.082226   65177 kubeadm.go:883] updating cluster {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:23.082414   65177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:20:23.082490   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:23.117427   65177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:20:23.117505   65177 ssh_runner.go:195] Run: which lz4
	I0723 15:20:23.121380   65177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:23.125694   65177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:23.125721   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:20:22.904910   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting to get IP...
	I0723 15:20:22.905969   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:22.906448   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:22.906508   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:22.906424   67094 retry.go:31] will retry after 215.638875ms: waiting for machine to come up
	I0723 15:20:23.124008   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.124474   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.124510   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.124440   67094 retry.go:31] will retry after 380.753429ms: waiting for machine to come up
	I0723 15:20:23.507362   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.507777   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.507803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.507744   67094 retry.go:31] will retry after 385.253161ms: waiting for machine to come up
	I0723 15:20:23.894227   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:23.894675   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:23.894697   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:23.894627   67094 retry.go:31] will retry after 533.715559ms: waiting for machine to come up
	I0723 15:20:24.429811   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:24.430290   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:24.430321   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:24.430242   67094 retry.go:31] will retry after 637.033889ms: waiting for machine to come up
	I0723 15:20:25.068770   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.069313   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.069345   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.069274   67094 retry.go:31] will retry after 796.484567ms: waiting for machine to come up
	I0723 15:20:25.867223   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:25.867663   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:25.867693   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:25.867604   67094 retry.go:31] will retry after 845.920319ms: waiting for machine to come up
	I0723 15:20:26.715077   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:26.715612   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:26.715643   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:26.715566   67094 retry.go:31] will retry after 1.265268276s: waiting for machine to come up
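The retry.go:31 lines above show libmachine polling for the guest's DHCP lease with a delay that grows on each attempt. A small sketch of that wait-with-backoff pattern, assuming a lookup callback in place of the real libvirt query (names here are illustrative, not the minikube retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// roughly doubling the sleep (plus jitter) between attempts, like the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet") // simulates the "unable to find current IP" case
		}
		return "192.168.50.51", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}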
	I0723 15:20:24.399306   65177 crio.go:462] duration metric: took 1.277970642s to copy over tarball
	I0723 15:20:24.399409   65177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:26.603797   65177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204354868s)
	I0723 15:20:26.603830   65177 crio.go:469] duration metric: took 2.204493799s to extract the tarball
	I0723 15:20:26.603839   65177 ssh_runner.go:146] rm: /preloaded.tar.lz4
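Above, the ~406 MB preloaded image tarball is copied into the guest, unpacked into /var with lz4, and then removed. A hedged sketch of the same extraction step run directly (this shells out to the system tar with the flags from the log rather than going through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the log: keep xattrs/capabilities, decompress with lz4, unpack under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}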
	I0723 15:20:26.641498   65177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:26.682771   65177 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:20:26.682793   65177 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:20:26.682802   65177 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0723 15:20:26.682948   65177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-486436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:26.683021   65177 ssh_runner.go:195] Run: crio config
	I0723 15:20:26.734908   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:26.734934   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:26.734947   65177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:26.734979   65177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-486436 NodeName:embed-certs-486436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:20:26.735162   65177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-486436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
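The kubeadm/kubelet/kube-proxy YAML above is rendered from the option struct logged at kubeadm.go:181. As a much-reduced illustration of that substitution step (the template and field set are trimmed for the sketch and are not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A stand-in for the template minikube renders above; only a few fields,
// purely to show the substitution step (field names mirror the log, nothing more).
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"APIServerPort":     "8443",
		"KubernetesVersion": "v1.30.3",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceCIDR":       "10.96.0.0/12",
	})
}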
	
	I0723 15:20:26.735247   65177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:20:26.746266   65177 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:26.746334   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:26.756387   65177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0723 15:20:26.771870   65177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:26.789639   65177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0723 15:20:26.807608   65177 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:26.811134   65177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:26.823851   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:26.952899   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:26.969453   65177 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436 for IP: 192.168.39.200
	I0723 15:20:26.969484   65177 certs.go:194] generating shared ca certs ...
	I0723 15:20:26.969503   65177 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:26.969694   65177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:26.969757   65177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:26.969770   65177 certs.go:256] generating profile certs ...
	I0723 15:20:26.969897   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/client.key
	I0723 15:20:26.969978   65177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key.8481dffb
	I0723 15:20:26.970038   65177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key
	I0723 15:20:26.970164   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:26.970203   65177 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:26.970216   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:26.970255   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:26.970279   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:26.970309   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:26.970369   65177 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:26.971269   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:27.026302   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:27.075563   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:27.109194   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:27.136748   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0723 15:20:27.159391   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:20:27.181933   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:27.203549   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/embed-certs-486436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:27.225473   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:27.254497   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:27.275874   65177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:27.299275   65177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:27.316223   65177 ssh_runner.go:195] Run: openssl version
	I0723 15:20:27.322037   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:27.333546   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337890   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.337945   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:27.343624   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:27.354738   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:27.365915   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370038   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.370101   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:27.375514   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:27.386502   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:27.396611   65177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400879   65177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.400978   65177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:27.406132   65177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
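The test/ln commands above publish each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0, 51391683.0) so TLS libraries can locate it. A rough equivalent of that hash-and-symlink step (sketch only; minikube drives the same openssl and ln commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a CA cert and exposes it as
// /etc/ssl/certs/<hash>.0, matching the symlink names seen in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}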
	I0723 15:20:27.415738   65177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:27.419755   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:27.424982   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:27.430277   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:27.435794   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:27.441244   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:27.446515   65177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
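The series of `openssl x509 -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The same check can be done with crypto/x509; a sketch (the path below is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}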
	I0723 15:20:27.451968   65177 kubeadm.go:392] StartCluster: {Name:embed-certs-486436 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-486436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:27.452053   65177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:27.452102   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.488671   65177 cri.go:89] found id: ""
	I0723 15:20:27.488758   65177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:27.498621   65177 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:27.498639   65177 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:27.498690   65177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:27.510485   65177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:27.511796   65177 kubeconfig.go:125] found "embed-certs-486436" server: "https://192.168.39.200:8443"
	I0723 15:20:27.513749   65177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:27.525206   65177 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I0723 15:20:27.525258   65177 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:27.525275   65177 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:27.525354   65177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:27.563337   65177 cri.go:89] found id: ""
	I0723 15:20:27.563411   65177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:27.583886   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:27.595493   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:27.595513   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:27.595591   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:27.606537   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:27.606596   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:27.616130   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:27.624277   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:27.624335   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:27.632787   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.641057   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:27.641113   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:27.649516   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:27.657977   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:27.658021   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:27.666489   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:27.675023   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.777750   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:27.982818   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:27.983136   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:27.983157   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:27.983112   67094 retry.go:31] will retry after 1.681215174s: waiting for machine to come up
	I0723 15:20:29.667369   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:29.667816   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:29.667846   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:29.667773   67094 retry.go:31] will retry after 1.742302977s: waiting for machine to come up
	I0723 15:20:31.412567   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:31.413046   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:31.413074   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:31.412990   67094 retry.go:31] will retry after 2.618033682s: waiting for machine to come up
	I0723 15:20:28.659756   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.867793   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:28.952107   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
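Because existing configuration was found, the restart path replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A sketch of driving those phases with the version-pinned binaries first on PATH, as the env wrapper in the log does (paths are assumed from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runPhase invokes one `kubeadm init phase ...` subcommand with the version-pinned
// binaries prepended to PATH, mirroring the env wrapper in the log. Sketch only.
func runPhase(phase string) error {
	args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.30.3:" + os.Getenv("PATH"),
		"kubeadm", "init", "phase"}
	args = append(args, strings.Fields(phase)...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		if err := runPhase(phase); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			return
		}
	}
}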
	I0723 15:20:29.020498   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:29.020632   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:29.521001   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.021488   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:30.520765   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.021749   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.521145   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:31.535745   65177 api_server.go:72] duration metric: took 2.515246955s to wait for apiserver process to appear ...
	I0723 15:20:31.535779   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:20:31.535802   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.561351   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.561400   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:33.561416   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:33.580699   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:20:33.580735   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:20:34.036231   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.045563   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.045603   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:34.536119   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:34.549417   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:20:34.549447   65177 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:20:35.035956   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:20:35.040331   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:20:35.046883   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:20:35.046909   65177 api_server.go:131] duration metric: took 3.511123729s to wait for apiserver health ...
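The /healthz exchange above is the usual cold-start progression: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac and priority-class poststarthooks finish, then 200. A minimal polling sketch against the same endpoint (URL and time budget are taken from the log; TLS verification is skipped in this standalone probe because the serving cert is signed by the cluster CA, not a system root):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Standalone probe: skip verification since the apiserver cert is not in the system roots.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.200:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthy: %s\n", body)
				return
			}
			fmt.Printf("not ready yet (%d)\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}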
	I0723 15:20:35.046918   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:20:35.046924   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:35.048858   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:20:34.034295   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:34.034660   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:34.034682   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:34.034634   67094 retry.go:31] will retry after 2.832404848s: waiting for machine to come up
	I0723 15:20:35.050411   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:20:35.061924   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:20:35.088990   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:20:35.102746   65177 system_pods.go:59] 8 kube-system pods found
	I0723 15:20:35.102778   65177 system_pods.go:61] "coredns-7db6d8ff4d-v842j" [f3509de1-edf7-46c4-af5b-89338770d2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:20:35.102786   65177 system_pods.go:61] "etcd-embed-certs-486436" [46b72abd-c16d-452d-8c17-909fd2a25fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:20:35.102796   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [2ce2344f-5ddc-438b-8f16-338bc266da83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:20:35.102804   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [3f483328-583f-4c71-8372-db418f593b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:20:35.102812   65177 system_pods.go:61] "kube-proxy-f4vfh" [00e430df-ccc5-463d-96f9-288e2e611e2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:20:35.102822   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [0c581c3d-78ab-47d8-81a8-9d176192a94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:20:35.102829   65177 system_pods.go:61] "metrics-server-569cc877fc-rq67z" [b6371591-2fac-47f5-b20b-635c9f0755c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:20:35.102840   65177 system_pods.go:61] "storage-provisioner" [a0545674-2bfc-48b4-940e-cdedf02c5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:20:35.102849   65177 system_pods.go:74] duration metric: took 13.834305ms to wait for pod list to return data ...
	I0723 15:20:35.102857   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:20:35.106953   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:20:35.106977   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:20:35.106991   65177 node_conditions.go:105] duration metric: took 4.127613ms to run NodePressure ...
	I0723 15:20:35.107010   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:35.395355   65177 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399496   65177 kubeadm.go:739] kubelet initialised
	I0723 15:20:35.399514   65177 kubeadm.go:740] duration metric: took 4.133847ms waiting for restarted kubelet to initialise ...
	I0723 15:20:35.399521   65177 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:20:35.404293   65177 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.408404   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408423   65177 pod_ready.go:81] duration metric: took 4.111276ms for pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.408431   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "coredns-7db6d8ff4d-v842j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.408440   65177 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.412361   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412379   65177 pod_ready.go:81] duration metric: took 3.929729ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.412391   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "etcd-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.412403   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.416588   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416603   65177 pod_ready.go:81] duration metric: took 4.193735ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.416610   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.416616   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.492691   65177 pod_ready.go:97] node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492715   65177 pod_ready.go:81] duration metric: took 76.092496ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	E0723 15:20:35.492724   65177 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-486436" hosting pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-486436" has status "Ready":"False"
	I0723 15:20:35.492731   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892820   65177 pod_ready.go:92] pod "kube-proxy-f4vfh" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:35.892843   65177 pod_ready.go:81] duration metric: took 400.103193ms for pod "kube-proxy-f4vfh" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:35.892853   65177 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:37.898159   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
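The pod_ready.go lines above poll each system-critical pod and key off its Ready condition, skipping pods whose node is itself not Ready. A rough client-go sketch of that per-pod check (the kubeconfig path is a placeholder; the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, the same signal
// the "Ready":"True"/"False" lines in the log are printing.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-486436", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}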
	I0723 15:20:36.869147   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:36.869555   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | unable to find current IP address of domain old-k8s-version-000272 in network mk-old-k8s-version-000272
	I0723 15:20:36.869593   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | I0723 15:20:36.869499   67094 retry.go:31] will retry after 4.334096738s: waiting for machine to come up
	I0723 15:20:41.208992   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209340   65605 main.go:141] libmachine: (old-k8s-version-000272) Found IP for machine: 192.168.50.51
	I0723 15:20:41.209364   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserving static IP address...
	I0723 15:20:41.209382   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has current primary IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.209808   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.209843   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | skip adding static IP to network mk-old-k8s-version-000272 - found existing host DHCP lease matching {name: "old-k8s-version-000272", mac: "52:54:00:90:92:e1", ip: "192.168.50.51"}
	I0723 15:20:41.209862   65605 main.go:141] libmachine: (old-k8s-version-000272) Reserved static IP address: 192.168.50.51
	I0723 15:20:41.209878   65605 main.go:141] libmachine: (old-k8s-version-000272) Waiting for SSH to be available...
	I0723 15:20:41.209916   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Getting to WaitForSSH function...
	I0723 15:20:41.211671   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.211918   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.211956   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.212110   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH client type: external
	I0723 15:20:41.212139   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa (-rw-------)
	I0723 15:20:41.212191   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:20:41.212211   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | About to run SSH command:
	I0723 15:20:41.212229   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | exit 0
	I0723 15:20:41.334852   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | SSH cmd err, output: <nil>: 
	I0723 15:20:41.335260   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetConfigRaw
	I0723 15:20:41.335965   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.338425   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.338803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.338842   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.339024   65605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/config.json ...
	I0723 15:20:41.339218   65605 machine.go:94] provisionDockerMachine start ...
	I0723 15:20:41.339235   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:41.339476   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.341528   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.341881   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.341909   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.342008   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.342192   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342352   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.342502   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.342674   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.342855   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.342865   65605 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:20:41.442564   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:20:41.442592   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.442857   65605 buildroot.go:166] provisioning hostname "old-k8s-version-000272"
	I0723 15:20:41.442872   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.443076   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.445976   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446389   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.446429   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.446553   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.446719   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.446972   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.447096   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.447249   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.447418   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.447434   65605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-000272 && echo "old-k8s-version-000272" | sudo tee /etc/hostname
	I0723 15:20:41.559708   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-000272
	
	I0723 15:20:41.559739   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.562630   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.562954   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.562977   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.563156   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.563340   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563501   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.563596   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.563779   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.563977   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.564006   65605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-000272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-000272/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-000272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:20:41.671327   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:20:41.671363   65605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:20:41.671396   65605 buildroot.go:174] setting up certificates
	I0723 15:20:41.671407   65605 provision.go:84] configureAuth start
	I0723 15:20:41.671418   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetMachineName
	I0723 15:20:41.671766   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:41.674340   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.674812   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.674848   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.675019   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.677052   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677386   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.677418   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.677568   65605 provision.go:143] copyHostCerts
	I0723 15:20:41.677636   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:20:41.677651   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:20:41.677715   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:20:41.677826   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:20:41.677836   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:20:41.677866   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:20:41.677939   65605 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:20:41.677949   65605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:20:41.677975   65605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:20:41.678039   65605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-000272 san=[127.0.0.1 192.168.50.51 localhost minikube old-k8s-version-000272]
	I0723 15:20:41.745999   65605 provision.go:177] copyRemoteCerts
	I0723 15:20:41.746077   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:20:41.746123   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.748908   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749226   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.749252   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.749417   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.749616   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.749771   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.749903   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:41.828867   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:20:42.386874   66641 start.go:364] duration metric: took 2m0.299552173s to acquireMachinesLock for "default-k8s-diff-port-911217"
	I0723 15:20:42.386943   66641 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:20:42.386951   66641 fix.go:54] fixHost starting: 
	I0723 15:20:42.387316   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:20:42.387356   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:20:42.405492   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0723 15:20:42.405947   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:20:42.406493   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:20:42.406517   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:20:42.406843   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:20:42.407031   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:20:42.407169   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:20:42.408621   66641 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911217: state=Stopped err=<nil>
	I0723 15:20:42.408657   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	W0723 15:20:42.408798   66641 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:20:42.410540   66641 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-911217" ...
	I0723 15:20:39.899515   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.903102   65177 pod_ready.go:102] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:41.852296   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0723 15:20:41.874579   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:20:41.897065   65605 provision.go:87] duration metric: took 225.644058ms to configureAuth
	I0723 15:20:41.897095   65605 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:20:41.897287   65605 config.go:182] Loaded profile config "old-k8s-version-000272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:20:41.897354   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:41.900232   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902335   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:41.902328   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:41.902412   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:41.902623   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.902826   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:41.903015   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:41.903209   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:41.903388   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:41.903407   65605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:20:42.162998   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:20:42.163019   65605 machine.go:97] duration metric: took 823.789368ms to provisionDockerMachine
	I0723 15:20:42.163030   65605 start.go:293] postStartSetup for "old-k8s-version-000272" (driver="kvm2")
	I0723 15:20:42.163040   65605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:20:42.163054   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.163444   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:20:42.163471   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.166193   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.166628   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.166670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.166842   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.167037   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.167181   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.248364   65605 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:20:42.252403   65605 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:20:42.252433   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:20:42.252504   65605 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:20:42.252596   65605 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:20:42.252693   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:20:42.262571   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:42.285115   65605 start.go:296] duration metric: took 122.072017ms for postStartSetup
	I0723 15:20:42.285160   65605 fix.go:56] duration metric: took 20.697977265s for fixHost
	I0723 15:20:42.285180   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.287760   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288032   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.288062   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.288187   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.288428   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288606   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.288799   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.289000   65605 main.go:141] libmachine: Using SSH client type: native
	I0723 15:20:42.289216   65605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.51 22 <nil> <nil>}
	I0723 15:20:42.289232   65605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:20:42.386682   65605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748042.363547028
	
	I0723 15:20:42.386711   65605 fix.go:216] guest clock: 1721748042.363547028
	I0723 15:20:42.386723   65605 fix.go:229] Guest: 2024-07-23 15:20:42.363547028 +0000 UTC Remote: 2024-07-23 15:20:42.285164316 +0000 UTC m=+255.470399434 (delta=78.382712ms)
	I0723 15:20:42.386754   65605 fix.go:200] guest clock delta is within tolerance: 78.382712ms
	I0723 15:20:42.386765   65605 start.go:83] releasing machines lock for "old-k8s-version-000272", held for 20.799620907s
	I0723 15:20:42.386796   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.387067   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:42.390116   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390543   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.390589   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.390703   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391215   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391395   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .DriverName
	I0723 15:20:42.391482   65605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:20:42.391527   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.391645   65605 ssh_runner.go:195] Run: cat /version.json
	I0723 15:20:42.391670   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHHostname
	I0723 15:20:42.394373   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394732   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.394757   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394803   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.394924   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395081   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395245   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.395286   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:42.395331   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:42.395428   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.395579   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHPort
	I0723 15:20:42.395726   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHKeyPath
	I0723 15:20:42.395963   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetSSHUsername
	I0723 15:20:42.396145   65605 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/old-k8s-version-000272/id_rsa Username:docker}
	I0723 15:20:42.499940   65605 ssh_runner.go:195] Run: systemctl --version
	I0723 15:20:42.505917   65605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:42.646731   65605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:20:42.652550   65605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:20:42.652612   65605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:20:42.667337   65605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:20:42.667357   65605 start.go:495] detecting cgroup driver to use...
	I0723 15:20:42.667419   65605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:20:42.681839   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:20:42.694833   65605 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:20:42.694888   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:20:42.707800   65605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:20:42.720914   65605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:20:42.844082   65605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:20:43.024993   65605 docker.go:233] disabling docker service ...
	I0723 15:20:43.025076   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:20:43.057263   65605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:20:43.070881   65605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:20:43.180616   65605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:20:43.295769   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:20:43.311341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:20:43.333719   65605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0723 15:20:43.333787   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.345261   65605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:20:43.345364   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.356669   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.366947   65605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:20:43.378177   65605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:20:43.390672   65605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:20:43.400591   65605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:20:43.400645   65605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:20:43.413974   65605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:20:43.423528   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:43.545030   65605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:20:43.685902   65605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:20:43.686018   65605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:20:43.691692   65605 start.go:563] Will wait 60s for crictl version
	I0723 15:20:43.691742   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:43.695470   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:20:43.733229   65605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:20:43.733329   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.765591   65605 ssh_runner.go:195] Run: crio --version
	I0723 15:20:43.794762   65605 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0723 15:20:43.796073   65605 main.go:141] libmachine: (old-k8s-version-000272) Calling .GetIP
	I0723 15:20:43.799075   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799549   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:92:e1", ip: ""} in network mk-old-k8s-version-000272: {Iface:virbr4 ExpiryTime:2024-07-23 16:20:32 +0000 UTC Type:0 Mac:52:54:00:90:92:e1 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:old-k8s-version-000272 Clientid:01:52:54:00:90:92:e1}
	I0723 15:20:43.799585   65605 main.go:141] libmachine: (old-k8s-version-000272) DBG | domain old-k8s-version-000272 has defined IP address 192.168.50.51 and MAC address 52:54:00:90:92:e1 in network mk-old-k8s-version-000272
	I0723 15:20:43.799780   65605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0723 15:20:43.803604   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:43.818919   65605 kubeadm.go:883] updating cluster {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:20:43.819019   65605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 15:20:43.819073   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:43.872208   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:43.872268   65605 ssh_runner.go:195] Run: which lz4
	I0723 15:20:43.876273   65605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0723 15:20:43.880532   65605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:20:43.880566   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0723 15:20:45.299916   65605 crio.go:462] duration metric: took 1.423681931s to copy over tarball
	I0723 15:20:45.299989   65605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:20:42.411787   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Start
	I0723 15:20:42.411942   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring networks are active...
	I0723 15:20:42.412743   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network default is active
	I0723 15:20:42.413086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Ensuring network mk-default-k8s-diff-port-911217 is active
	I0723 15:20:42.413500   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Getting domain xml...
	I0723 15:20:42.414312   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Creating domain...
	I0723 15:20:43.688063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting to get IP...
	I0723 15:20:43.689007   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689403   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.689503   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.689396   67258 retry.go:31] will retry after 291.635723ms: waiting for machine to come up
	I0723 15:20:43.982895   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:43.983344   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:43.983269   67258 retry.go:31] will retry after 315.035251ms: waiting for machine to come up
	I0723 15:20:44.300029   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300502   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.300544   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.300453   67258 retry.go:31] will retry after 314.08729ms: waiting for machine to come up
	I0723 15:20:44.615873   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616274   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:44.616299   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:44.616221   67258 retry.go:31] will retry after 424.738509ms: waiting for machine to come up
	I0723 15:20:45.042987   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043464   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.043522   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.043438   67258 retry.go:31] will retry after 711.273362ms: waiting for machine to come up
	I0723 15:20:45.755790   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756332   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:45.756366   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:45.756261   67258 retry.go:31] will retry after 880.333826ms: waiting for machine to come up
	I0723 15:20:46.638270   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638815   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:46.638859   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:46.638766   67258 retry.go:31] will retry after 733.311982ms: waiting for machine to come up
	I0723 15:20:43.398761   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:20:43.398790   65177 pod_ready.go:81] duration metric: took 7.505930182s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:43.398803   65177 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	I0723 15:20:45.406572   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:47.406841   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:48.176598   65605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87658172s)
	I0723 15:20:48.176623   65605 crio.go:469] duration metric: took 2.876682557s to extract the tarball
	I0723 15:20:48.176632   65605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:20:48.221431   65605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:20:48.256729   65605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0723 15:20:48.256750   65605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:20:48.256833   65605 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.256883   65605 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.256906   65605 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.256840   65605 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.256896   65605 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.256841   65605 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.256851   65605 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0723 15:20:48.256858   65605 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258836   65605 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.258855   65605 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.258867   65605 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0723 15:20:48.258913   65605 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.258840   65605 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.258841   65605 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.258842   65605 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:48.258906   65605 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.548121   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.552098   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.552418   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.560834   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.580417   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0723 15:20:48.590031   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.619770   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.633302   65605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0723 15:20:48.633365   65605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.633414   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.660305   65605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0723 15:20:48.660383   65605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.660439   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.691792   65605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0723 15:20:48.691853   65605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.691902   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707832   65605 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0723 15:20:48.707867   65605 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0723 15:20:48.707901   65605 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0723 15:20:48.707917   65605 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.707945   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.707957   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.722912   65605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0723 15:20:48.722960   65605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.723012   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729754   65605 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0723 15:20:48.729792   65605 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.729820   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0723 15:20:48.729874   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0723 15:20:48.729826   65605 ssh_runner.go:195] Run: which crictl
	I0723 15:20:48.729827   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0723 15:20:48.730025   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0723 15:20:48.730037   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0723 15:20:48.730113   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0723 15:20:48.848335   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0723 15:20:48.849228   65605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0723 15:20:48.849310   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0723 15:20:48.858540   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0723 15:20:48.858650   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0723 15:20:48.858711   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0723 15:20:48.858750   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0723 15:20:48.889577   65605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0723 15:20:49.134808   65605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:20:49.273570   65605 cache_images.go:92] duration metric: took 1.016803126s to LoadCachedImages
	W0723 15:20:49.273670   65605 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0723 15:20:49.273686   65605 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.20.0 crio true true} ...
	I0723 15:20:49.273808   65605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-000272 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:20:49.273902   65605 ssh_runner.go:195] Run: crio config
	I0723 15:20:49.321968   65605 cni.go:84] Creating CNI manager for ""
	I0723 15:20:49.321995   65605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:20:49.322007   65605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:20:49.322028   65605 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-000272 NodeName:old-k8s-version-000272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0723 15:20:49.322208   65605 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-000272"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:20:49.322292   65605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0723 15:20:49.332563   65605 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:20:49.332636   65605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:20:49.345174   65605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0723 15:20:49.364369   65605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:20:49.379807   65605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0723 15:20:49.396643   65605 ssh_runner.go:195] Run: grep 192.168.50.51	control-plane.minikube.internal$ /etc/hosts
	I0723 15:20:49.400437   65605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:20:49.412291   65605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:20:49.539360   65605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:20:49.556165   65605 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272 for IP: 192.168.50.51
	I0723 15:20:49.556198   65605 certs.go:194] generating shared ca certs ...
	I0723 15:20:49.556218   65605 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:49.556393   65605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:20:49.556448   65605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:20:49.556457   65605 certs.go:256] generating profile certs ...
	I0723 15:20:49.556574   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.key
	I0723 15:20:49.556652   65605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key.2c7d9ab3
	I0723 15:20:49.556699   65605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key
	I0723 15:20:49.556845   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:20:49.556900   65605 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:20:49.556913   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:20:49.556947   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:20:49.557001   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:20:49.557036   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:20:49.557087   65605 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:20:49.557993   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:20:49.605662   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:20:49.639122   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:20:49.665264   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:20:49.691008   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0723 15:20:49.723820   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:20:49.750608   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:20:49.776942   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:20:49.809923   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:20:49.834935   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:20:49.857389   65605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:20:49.880619   65605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:20:49.897369   65605 ssh_runner.go:195] Run: openssl version
	I0723 15:20:49.902878   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:20:49.913861   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918296   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.918359   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:20:49.924159   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:20:49.936081   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:20:49.947674   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952040   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.952090   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:20:49.957714   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:20:49.969333   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:20:49.981037   65605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985257   65605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.985303   65605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:20:49.991083   65605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:20:50.002977   65605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:20:50.007497   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:20:50.013359   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:20:50.019202   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:20:50.025182   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:20:50.030979   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:20:50.036818   65605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
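	The six openssl runs just above all use "-checkend 86400", which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); presumably this is how minikube decides whether the existing control-plane certificates can be reused on restart. A minimal shell sketch of the same check, run by hand against two of the paths shown above (adjust paths as needed):
	
	  # Exit status 0 means the certificate is still valid for at least 24h.
	  for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/etcd/server.crt; do
	    openssl x509 -noout -in "$crt" -checkend 86400 \
	      && echo "$crt: ok" \
	      || echo "$crt: expiring within 24h or unreadable"
	  done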
	I0723 15:20:50.042573   65605 kubeadm.go:392] StartCluster: {Name:old-k8s-version-000272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-000272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:20:50.042687   65605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:20:50.042734   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.084635   65605 cri.go:89] found id: ""
	I0723 15:20:50.084714   65605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:20:50.096501   65605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:20:50.096521   65605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:20:50.096585   65605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:20:50.107443   65605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:20:50.108742   65605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-000272" does not appear in /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:20:50.109665   65605 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-11303/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-000272" cluster setting kubeconfig missing "old-k8s-version-000272" context setting]
	I0723 15:20:50.111089   65605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:20:50.178975   65605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:20:50.190920   65605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
	I0723 15:20:50.190961   65605 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:20:50.190972   65605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:20:50.191033   65605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:20:50.230879   65605 cri.go:89] found id: ""
	I0723 15:20:50.230972   65605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:20:50.247994   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:20:50.257490   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:20:50.257518   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:20:50.257576   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:20:50.266704   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:20:50.266763   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:20:50.276276   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:20:50.285533   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:20:50.285613   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:20:50.294642   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.303358   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:20:50.303414   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:20:50.313060   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:20:50.322294   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:20:50.322364   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:20:50.331659   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:20:50.341120   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:50.460900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.327126   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.576244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.662730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:20:51.762087   65605 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:20:51.762179   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:47.373536   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374064   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:47.374096   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:47.373991   67258 retry.go:31] will retry after 1.176593909s: waiting for machine to come up
	I0723 15:20:48.552701   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553183   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:48.553216   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:48.553135   67258 retry.go:31] will retry after 1.485919187s: waiting for machine to come up
	I0723 15:20:50.040417   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:50.040886   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:50.040808   67258 retry.go:31] will retry after 2.212005186s: waiting for machine to come up
	I0723 15:20:50.444583   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.905273   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:52.262683   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:52.763266   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.263151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:53.763313   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.262366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:54.763167   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.263068   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:55.762864   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.262305   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:56.762857   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
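	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are minikube polling, roughly every 500 ms, for the kube-apiserver process to appear after the kubeadm init phases. A hedged shell sketch of an equivalent wait loop (the 120-attempt cap is illustrative, not minikube's actual timeout):
	
	  # Poll for a running kube-apiserver; give up after about 60 seconds.
	  for i in $(seq 1 120); do
	    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	      echo "kube-apiserver is up"
	      break
	    fi
	    sleep 0.5
	  done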
	I0723 15:20:52.254679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255063   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:52.255094   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:52.255018   67258 retry.go:31] will retry after 2.737596804s: waiting for machine to come up
	I0723 15:20:54.995373   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995679   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:54.995705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:54.995633   67258 retry.go:31] will retry after 2.363037622s: waiting for machine to come up
	I0723 15:20:55.405124   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:20:57.405898   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.767191   64842 start.go:364] duration metric: took 55.07978775s to acquireMachinesLock for "no-preload-543029"
	I0723 15:21:01.767250   64842 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:21:01.767261   64842 fix.go:54] fixHost starting: 
	I0723 15:21:01.767727   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:01.767763   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:01.785721   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I0723 15:21:01.786113   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:01.786792   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:01.786819   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:01.787127   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:01.787328   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:01.787485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:01.789046   64842 fix.go:112] recreateIfNeeded on no-preload-543029: state=Stopped err=<nil>
	I0723 15:21:01.789080   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	W0723 15:21:01.789255   64842 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:21:01.791610   64842 out.go:177] * Restarting existing kvm2 VM for "no-preload-543029" ...
	I0723 15:20:57.263221   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.262445   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:58.762456   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.263288   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:59.763206   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.263158   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:00.762517   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.263183   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:01.762347   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:20:57.362159   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362567   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | unable to find current IP address of domain default-k8s-diff-port-911217 in network mk-default-k8s-diff-port-911217
	I0723 15:20:57.362593   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | I0723 15:20:57.362539   67258 retry.go:31] will retry after 2.888037123s: waiting for machine to come up
	I0723 15:21:00.253973   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.254583   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Found IP for machine: 192.168.61.64
	I0723 15:21:00.254603   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserving static IP address...
	I0723 15:21:00.254630   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has current primary IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.255048   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Reserved static IP address: 192.168.61.64
	I0723 15:21:00.255074   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Waiting for SSH to be available...
	I0723 15:21:00.255105   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.255130   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | skip adding static IP to network mk-default-k8s-diff-port-911217 - found existing host DHCP lease matching {name: "default-k8s-diff-port-911217", mac: "52:54:00:78:3f:f3", ip: "192.168.61.64"}
	I0723 15:21:00.255145   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Getting to WaitForSSH function...
	I0723 15:21:00.257683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258026   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.258054   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.258147   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH client type: external
	I0723 15:21:00.258176   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa (-rw-------)
	I0723 15:21:00.258208   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:00.258220   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | About to run SSH command:
	I0723 15:21:00.258240   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | exit 0
	I0723 15:21:00.382323   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:00.382710   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetConfigRaw
	I0723 15:21:00.383397   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.386258   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.386718   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.386918   66641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/config.json ...
	I0723 15:21:00.387143   66641 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:00.387164   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:00.387412   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.389494   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389798   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.389824   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.389917   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.390082   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390237   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.390438   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.390628   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.390842   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.390857   66641 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:00.486433   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:00.486468   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486725   66641 buildroot.go:166] provisioning hostname "default-k8s-diff-port-911217"
	I0723 15:21:00.486750   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.486948   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.489770   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490120   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.490149   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.490273   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.490475   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490671   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.490882   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.491062   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.491230   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.491246   66641 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-911217 && echo "default-k8s-diff-port-911217" | sudo tee /etc/hostname
	I0723 15:21:00.603917   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-911217
	
	I0723 15:21:00.603953   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.606538   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.606898   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.606943   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.607069   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:00.607306   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607525   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:00.607711   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:00.607920   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:00.608129   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:00.608147   66641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-911217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-911217/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-911217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:00.710852   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:00.710887   66641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:00.710915   66641 buildroot.go:174] setting up certificates
	I0723 15:21:00.710928   66641 provision.go:84] configureAuth start
	I0723 15:21:00.710939   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetMachineName
	I0723 15:21:00.711205   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:00.714141   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714519   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.714552   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.714765   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:00.717395   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:00.717739   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:00.717939   66641 provision.go:143] copyHostCerts
	I0723 15:21:00.718004   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:00.718020   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:00.718115   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:00.718237   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:00.718250   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:00.718284   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:00.718373   66641 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:00.718401   66641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:00.718431   66641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:00.718522   66641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-911217 san=[127.0.0.1 192.168.61.64 default-k8s-diff-port-911217 localhost minikube]
	I0723 15:21:01.133831   66641 provision.go:177] copyRemoteCerts
	I0723 15:21:01.133894   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:01.133919   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.136913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.137359   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.137569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.137782   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.137944   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.138115   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.217531   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:01.241478   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0723 15:21:01.265056   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0723 15:21:01.287281   66641 provision.go:87] duration metric: took 576.341839ms to configureAuth
	I0723 15:21:01.287317   66641 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:01.287496   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:01.287579   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.290157   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290640   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.290668   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.290775   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.290978   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.291315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.291509   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.291673   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.291688   66641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:01.540756   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:01.540783   66641 machine.go:97] duration metric: took 1.153625976s to provisionDockerMachine
	I0723 15:21:01.540796   66641 start.go:293] postStartSetup for "default-k8s-diff-port-911217" (driver="kvm2")
	I0723 15:21:01.540809   66641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:01.540827   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.541189   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:01.541225   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.544068   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544486   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.544511   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.544600   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.544788   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.544945   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.545154   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.625316   66641 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:01.629446   66641 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:01.629469   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:01.629529   66641 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:01.629634   66641 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:01.629759   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:01.639896   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:01.663515   66641 start.go:296] duration metric: took 122.707128ms for postStartSetup
	I0723 15:21:01.663551   66641 fix.go:56] duration metric: took 19.276599962s for fixHost
	I0723 15:21:01.663569   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.666406   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.666830   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.666861   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.667086   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.667290   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667487   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.667684   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.667895   66641 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:01.668100   66641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.64 22 <nil> <nil>}
	I0723 15:21:01.668116   66641 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0723 15:21:01.767011   66641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748061.738020629
	
	I0723 15:21:01.767035   66641 fix.go:216] guest clock: 1721748061.738020629
	I0723 15:21:01.767043   66641 fix.go:229] Guest: 2024-07-23 15:21:01.738020629 +0000 UTC Remote: 2024-07-23 15:21:01.66355459 +0000 UTC m=+139.710056956 (delta=74.466039ms)
	I0723 15:21:01.767088   66641 fix.go:200] guest clock delta is within tolerance: 74.466039ms
	I0723 15:21:01.767097   66641 start.go:83] releasing machines lock for "default-k8s-diff-port-911217", held for 19.380180818s
	I0723 15:21:01.767122   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.767446   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:01.770143   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770575   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.770607   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.770771   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771336   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771513   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:01.771672   66641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:01.771722   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.771767   66641 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:01.771792   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:01.774913   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775261   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775401   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775440   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775651   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.775783   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:01.775835   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:01.775851   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.775933   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:01.776044   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776119   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:01.776196   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.776293   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:01.776455   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:01.887716   66641 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:01.894935   66641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:20:59.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:01.906133   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:02.040633   66641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:02.047908   66641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:02.047982   66641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:02.067565   66641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0723 15:21:02.067589   66641 start.go:495] detecting cgroup driver to use...
	I0723 15:21:02.067648   66641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:02.083334   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:02.096435   66641 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:02.096501   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:02.109497   66641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:02.122475   66641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:02.238156   66641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:02.413213   66641 docker.go:233] disabling docker service ...
	I0723 15:21:02.413321   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:02.431076   66641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:02.443590   66641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:02.565848   66641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:02.708530   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:02.724781   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:02.744261   66641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:21:02.744317   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.755864   66641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:02.755939   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.768381   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.779157   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.789500   66641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:02.801063   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.812845   66641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.828742   66641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:02.840605   66641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:02.849796   66641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:02.849866   66641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:02.862982   66641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:02.874354   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:03.017881   66641 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:03.157623   66641 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:03.157699   66641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:03.162343   66641 start.go:563] Will wait 60s for crictl version
	I0723 15:21:03.162429   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:21:03.166092   66641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:03.203681   66641 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:03.203775   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.230722   66641 ssh_runner.go:195] Run: crio --version
	I0723 15:21:03.257801   66641 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
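
The CRI-O preparation logged above amounts to a pair of in-place edits to the 02-crio.conf drop-in (pause image and cgroup manager) followed by a systemd reload and restart. A minimal local sketch of the same idea, in Go, is below; the file path and values are taken from the log, while root privileges and a systemd-managed CRI-O install are assumed.

package main

import (
	"log"
	"os/exec"
)

// Sketch only: rewrite the CRI-O drop-in so it uses the cgroupfs cgroup
// manager and the registry.k8s.io/pause:3.9 pause image, then restart the
// service. Mirrors the sed/systemctl calls in the log; assumes root and a
// systemd-managed CRI-O install.
func main() {
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
}
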
	I0723 15:21:01.793112   64842 main.go:141] libmachine: (no-preload-543029) Calling .Start
	I0723 15:21:01.793305   64842 main.go:141] libmachine: (no-preload-543029) Ensuring networks are active...
	I0723 15:21:01.794004   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network default is active
	I0723 15:21:01.794444   64842 main.go:141] libmachine: (no-preload-543029) Ensuring network mk-no-preload-543029 is active
	I0723 15:21:01.794908   64842 main.go:141] libmachine: (no-preload-543029) Getting domain xml...
	I0723 15:21:01.795563   64842 main.go:141] libmachine: (no-preload-543029) Creating domain...
	I0723 15:21:03.126716   64842 main.go:141] libmachine: (no-preload-543029) Waiting to get IP...
	I0723 15:21:03.127667   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.128113   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.128193   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.128095   67435 retry.go:31] will retry after 265.57265ms: waiting for machine to come up
	I0723 15:21:03.395811   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.396355   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.396382   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.396301   67435 retry.go:31] will retry after 304.545362ms: waiting for machine to come up
	I0723 15:21:03.702841   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:03.703303   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:03.703332   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:03.703241   67435 retry.go:31] will retry after 326.35473ms: waiting for machine to come up
	I0723 15:21:04.032032   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.032670   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.032695   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.032568   67435 retry.go:31] will retry after 515.672537ms: waiting for machine to come up
	I0723 15:21:04.550461   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:04.550989   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:04.551019   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:04.550942   67435 retry.go:31] will retry after 735.237546ms: waiting for machine to come up
	I0723 15:21:05.287672   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.288362   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.288393   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.288259   67435 retry.go:31] will retry after 683.55844ms: waiting for machine to come up
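
The repeated "will retry after ...: waiting for machine to come up" lines are a plain retry loop: probe for the guest's DHCP lease, and if no IP has been assigned yet, sleep for a growing, jittered interval and try again. A generic sketch of that pattern follows; lookupIP is a placeholder probe, not minikube's actual function.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a probe with a growing, jittered delay, the same shape
// as the libmachine "will retry after ..." lines above. lookupIP is a
// stand-in for however the caller discovers the guest's address.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2 // back off before the next probe
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no lease yet") // stand-in probe
	}, 5)
	fmt.Println(ip, err)
}
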
	I0723 15:21:02.262289   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:02.763009   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.262852   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.763260   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.262964   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:04.762673   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.263335   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:05.762790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.262830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.762830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:03.259168   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetIP
	I0723 15:21:03.262241   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262705   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:03.262748   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:03.262930   66641 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:03.266969   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:03.278873   66641 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:03.279019   66641 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:21:03.279076   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.318295   66641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0723 15:21:03.318390   66641 ssh_runner.go:195] Run: which lz4
	I0723 15:21:03.322441   66641 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0723 15:21:03.326818   66641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0723 15:21:03.326857   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0723 15:21:04.624581   66641 crio.go:462] duration metric: took 1.302205276s to copy over tarball
	I0723 15:21:04.624665   66641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0723 15:21:06.913370   66641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288673981s)
	I0723 15:21:06.913403   66641 crio.go:469] duration metric: took 2.288793517s to extract the tarball
	I0723 15:21:06.913413   66641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0723 15:21:06.951820   66641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:03.906766   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:06.405854   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:05.973409   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:05.973872   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:05.973920   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:05.973856   67435 retry.go:31] will retry after 728.120188ms: waiting for machine to come up
	I0723 15:21:06.703125   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:06.703631   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:06.703661   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:06.703554   67435 retry.go:31] will retry after 1.052851436s: waiting for machine to come up
	I0723 15:21:07.758261   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:07.758823   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:07.758853   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:07.758766   67435 retry.go:31] will retry after 1.533027844s: waiting for machine to come up
	I0723 15:21:09.293721   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:09.294204   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:09.294230   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:09.294169   67435 retry.go:31] will retry after 1.399702148s: waiting for machine to come up
	I0723 15:21:07.262935   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:07.762473   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.262990   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:08.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.262850   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:09.762245   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.263207   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.762516   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.263298   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:11.762853   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:06.993755   66641 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:21:06.993783   66641 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:21:06.993793   66641 kubeadm.go:934] updating node { 192.168.61.64 8444 v1.30.3 crio true true} ...
	I0723 15:21:06.993917   66641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-911217 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:06.993994   66641 ssh_runner.go:195] Run: crio config
	I0723 15:21:07.040966   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:07.040991   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:07.041014   66641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:07.041040   66641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.64 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-911217 NodeName:default-k8s-diff-port-911217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:07.041222   66641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.64
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-911217"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:07.041284   66641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:21:07.051498   66641 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:07.051567   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:07.060752   66641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0723 15:21:07.078362   66641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:21:07.093890   66641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0723 15:21:07.121632   66641 ssh_runner.go:195] Run: grep 192.168.61.64	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:07.126674   66641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:07.139521   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:07.264702   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:07.286475   66641 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217 for IP: 192.168.61.64
	I0723 15:21:07.286499   66641 certs.go:194] generating shared ca certs ...
	I0723 15:21:07.286521   66641 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:07.286750   66641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:07.286814   66641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:07.286829   66641 certs.go:256] generating profile certs ...
	I0723 15:21:07.286928   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/client.key
	I0723 15:21:07.286986   66641 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key.a1750142
	I0723 15:21:07.287041   66641 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key
	I0723 15:21:07.287151   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:07.287182   66641 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:07.287191   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:07.287210   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:07.287233   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:07.287257   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:07.287288   66641 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:07.288006   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:07.331680   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:07.378132   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:07.423720   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:07.462077   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0723 15:21:07.489608   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:21:07.511619   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:07.535480   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/default-k8s-diff-port-911217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:07.557870   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:07.579317   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:07.601107   66641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:07.622717   66641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:07.638728   66641 ssh_runner.go:195] Run: openssl version
	I0723 15:21:07.644065   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:07.654161   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658261   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.658335   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:07.663893   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
	I0723 15:21:07.673883   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:07.684409   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688657   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.688710   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:07.694037   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:07.704621   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:07.714866   66641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719090   66641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.719133   66641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:07.724797   66641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:07.734660   66641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:07.739005   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:07.744615   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:07.749912   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:07.755350   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:07.760833   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:07.766701   66641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0723 15:21:07.773611   66641 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-911217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-911217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:07.773724   66641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:07.773788   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.812612   66641 cri.go:89] found id: ""
	I0723 15:21:07.812689   66641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:07.822628   66641 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:07.822648   66641 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:07.822699   66641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:07.831812   66641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:07.833459   66641 kubeconfig.go:125] found "default-k8s-diff-port-911217" server: "https://192.168.61.64:8444"
	I0723 15:21:07.836425   66641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:07.846945   66641 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.64
	I0723 15:21:07.846976   66641 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:07.846989   66641 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:07.847046   66641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:07.881091   66641 cri.go:89] found id: ""
	I0723 15:21:07.881180   66641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:07.900373   66641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:07.912010   66641 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:07.912035   66641 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:07.912092   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0723 15:21:07.920903   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:07.920981   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:07.930186   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0723 15:21:07.938825   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:07.938891   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:07.947852   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.957007   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:07.957076   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:07.966642   66641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0723 15:21:07.975395   66641 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:07.975457   66641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:07.984363   66641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:07.993997   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:08.112135   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.260639   66641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1484675s)
	I0723 15:21:09.260677   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.481542   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.546998   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:09.657302   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:09.657407   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.157632   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.658193   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:10.694922   66641 api_server.go:72] duration metric: took 1.037619978s to wait for apiserver process to appear ...
	I0723 15:21:10.694957   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:10.694980   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:08.406647   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:10.907117   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:13.783814   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.783855   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:13.783874   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:13.828920   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:13.828952   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:14.195191   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.199330   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.199350   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:14.695758   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:14.703433   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:14.703471   66641 api_server.go:103] status: https://192.168.61.64:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:15.196096   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:21:15.200578   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:21:15.208499   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:21:15.208523   66641 api_server.go:131] duration metric: took 4.513559684s to wait for apiserver health ...
	I0723 15:21:15.208532   66641 cni.go:84] Creating CNI manager for ""
	I0723 15:21:15.208539   66641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:15.210371   66641 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
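
The healthz probes above are plain HTTPS GETs against https://192.168.61.64:8444/healthz, repeated until the apiserver stops answering 403/500 and finally returns 200 with a body of "ok". A bare-bones sketch of that polling loop follows; skipping TLS verification is an assumption made here for brevity (the real client authenticates with the cluster's certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. InsecureSkipVerify is only for the sketch; a real
// client would present the cluster CA and client certificates.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.61.64:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
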
	I0723 15:21:10.696028   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:10.696532   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:10.696556   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:10.696480   67435 retry.go:31] will retry after 1.754927597s: waiting for machine to come up
	I0723 15:21:12.452705   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:12.453135   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:12.453164   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:12.453082   67435 retry.go:31] will retry after 2.354607493s: waiting for machine to come up
	I0723 15:21:14.809924   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:14.810438   64842 main.go:141] libmachine: (no-preload-543029) DBG | unable to find current IP address of domain no-preload-543029 in network mk-no-preload-543029
	I0723 15:21:14.810467   64842 main.go:141] libmachine: (no-preload-543029) DBG | I0723 15:21:14.810400   67435 retry.go:31] will retry after 4.422072307s: waiting for machine to come up
	I0723 15:21:12.262754   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:12.762339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.262358   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:13.762291   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.262339   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:14.762796   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.263008   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.762225   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.263100   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:16.762356   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:15.211787   66641 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:15.226475   66641 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:15.245284   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:15.253756   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:15.253789   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:15.253798   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:15.253805   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:15.253815   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:15.253822   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:15.253828   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:15.253833   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:15.253838   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:15.253844   66641 system_pods.go:74] duration metric: took 8.537438ms to wait for pod list to return data ...
	I0723 15:21:15.253853   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:15.258127   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:15.258153   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:15.258163   66641 node_conditions.go:105] duration metric: took 4.305171ms to run NodePressure ...
	I0723 15:21:15.258177   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:15.533298   66641 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541967   66641 kubeadm.go:739] kubelet initialised
	I0723 15:21:15.541987   66641 kubeadm.go:740] duration metric: took 8.645977ms waiting for restarted kubelet to initialise ...
	I0723 15:21:15.541995   66641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:15.549557   66641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.553971   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554002   66641 pod_ready.go:81] duration metric: took 4.418498ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.554013   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.554022   66641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.558017   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558040   66641 pod_ready.go:81] duration metric: took 4.009013ms for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.558050   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.558058   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.562197   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562219   66641 pod_ready.go:81] duration metric: took 4.154836ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.562228   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.562234   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:15.649441   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649466   66641 pod_ready.go:81] duration metric: took 87.224782ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:15.649477   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:15.649484   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.049016   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049052   66641 pod_ready.go:81] duration metric: took 399.56194ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.049063   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-proxy-d4zwd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.049071   66641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.449193   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449221   66641 pod_ready.go:81] duration metric: took 400.140989ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.449231   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.449239   66641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:16.849035   66641 pod_ready.go:97] node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849069   66641 pod_ready.go:81] duration metric: took 399.822211ms for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:16.849080   66641 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-911217" hosting pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:16.849087   66641 pod_ready.go:38] duration metric: took 1.307085242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
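
The wait loop above polls each system-critical pod until its Ready condition is True, and bails out early here because the node itself still reports "Ready":"False". Below is a minimal client-go sketch of the same polling idea; it is illustrative only, not minikube's pod_ready.go. The pod names, namespace, and 4-minute budget are copied from the log, everything else (kubeconfig location, poll interval) is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	pods := []string{
		"coredns-7db6d8ff4d-9qcfs",
		"etcd-default-k8s-diff-port-911217",
		"kube-apiserver-default-k8s-diff-port-911217",
	}
	for _, name := range pods {
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting for %q\n", name)
				break
			}
			time.Sleep(400 * time.Millisecond)
		}
	}
}
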
	I0723 15:21:16.849102   66641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:16.860322   66641 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:16.860344   66641 kubeadm.go:597] duration metric: took 9.037689802s to restartPrimaryControlPlane
	I0723 15:21:16.860353   66641 kubeadm.go:394] duration metric: took 9.086749188s to StartCluster
	I0723 15:21:16.860368   66641 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.860445   66641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:16.862706   66641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:16.863010   66641 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.64 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:16.863105   66641 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:16.863162   66641 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863183   66641 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863194   66641 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863201   66641 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:16.863202   66641 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-911217"
	I0723 15:21:16.863218   66641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-911217"
	I0723 15:21:16.863225   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863235   66641 config.go:182] Loaded profile config "default-k8s-diff-port-911217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:21:16.863261   66641 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.863272   66641 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:16.863304   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.863517   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863547   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863553   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863566   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.863584   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.863612   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.864773   66641 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:16.866155   66641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:16.879697   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0723 15:21:16.880186   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.880765   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.880786   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.881122   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.881681   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.881712   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.882675   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0723 15:21:16.883162   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.883709   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.883730   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.883748   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0723 15:21:16.884082   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.884138   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.884609   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.884639   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.884610   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.884699   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.885040   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.885254   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.888611   66641 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-911217"
	W0723 15:21:16.888627   66641 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:16.888651   66641 host.go:66] Checking if "default-k8s-diff-port-911217" exists ...
	I0723 15:21:16.888916   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.888944   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.899013   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0723 15:21:16.899458   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.900188   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.900208   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.900593   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.900786   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.902589   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0723 15:21:16.903091   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.903189   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.904095   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.904118   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.904576   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.904810   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.905242   66641 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:16.905443   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0723 15:21:16.905849   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.906358   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.906375   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.906491   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:16.906512   66641 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:16.906533   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.906766   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.906920   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.907374   66641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:16.907409   66641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:16.909637   66641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:16.910635   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911126   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.911154   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.911331   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.911534   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.911683   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.911859   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.913408   66641 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:16.913435   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:16.913456   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.916884   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.917338   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.917647   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.917896   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.918061   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.918207   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:16.930880   66641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0723 15:21:16.931386   66641 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:16.931925   66641 main.go:141] libmachine: Using API Version  1
	I0723 15:21:16.931951   66641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:16.932292   66641 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:16.932495   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetState
	I0723 15:21:16.934404   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .DriverName
	I0723 15:21:16.934645   66641 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:16.934659   66641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:16.934675   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHHostname
	I0723 15:21:16.937624   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.937991   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:3f:f3", ip: ""} in network mk-default-k8s-diff-port-911217: {Iface:virbr3 ExpiryTime:2024-07-23 16:20:53 +0000 UTC Type:0 Mac:52:54:00:78:3f:f3 Iaid: IPaddr:192.168.61.64 Prefix:24 Hostname:default-k8s-diff-port-911217 Clientid:01:52:54:00:78:3f:f3}
	I0723 15:21:16.938013   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | domain default-k8s-diff-port-911217 has defined IP address 192.168.61.64 and MAC address 52:54:00:78:3f:f3 in network mk-default-k8s-diff-port-911217
	I0723 15:21:16.938166   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHPort
	I0723 15:21:16.938342   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHKeyPath
	I0723 15:21:16.938523   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .GetSSHUsername
	I0723 15:21:16.938695   66641 sshutil.go:53] new ssh client: &{IP:192.168.61.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/default-k8s-diff-port-911217/id_rsa Username:docker}
	I0723 15:21:13.407459   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:15.906352   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:17.068411   66641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:17.084266   66641 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:17.189089   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:17.189118   66641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:17.205584   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:17.205623   66641 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:17.209103   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:17.224264   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:17.245125   66641 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:17.245152   66641 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:17.272564   66641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:18.245078   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020778604s)
	I0723 15:21:18.245165   66641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036025141s)
	I0723 15:21:18.245186   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245195   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245209   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245213   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245201   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245315   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245513   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245526   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245543   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245550   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245633   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245648   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245657   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245665   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245682   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245695   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245703   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.245723   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.245842   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245872   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245903   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245911   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.245928   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.245932   66641 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-911217"
	I0723 15:21:18.245982   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) DBG | Closing plugin on server side
	I0723 15:21:18.245987   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.246004   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.251643   66641 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:18.251660   66641 main.go:141] libmachine: (default-k8s-diff-port-911217) Calling .Close
	I0723 15:21:18.251879   66641 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:18.251889   66641 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:18.253737   66641 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
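
As the commands above show, enabling an addon in this phase amounts to copying its manifests under /etc/kubernetes/addons inside the guest and applying them with the bundled kubectl against the in-VM kubeconfig. A rough stand-alone equivalent of that apply step is sketched here with os/exec rather than minikube's ssh_runner; the binary and manifest paths are copied from the log, and this is not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths as they appear inside the guest VM in the log above.
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command(kubectl, args...)
	// Point kubectl at the cluster-local kubeconfig, as the logged command does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}
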
	I0723 15:21:19.235665   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236110   64842 main.go:141] libmachine: (no-preload-543029) Found IP for machine: 192.168.72.227
	I0723 15:21:19.236141   64842 main.go:141] libmachine: (no-preload-543029) Reserving static IP address...
	I0723 15:21:19.236154   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has current primary IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.236541   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.236571   64842 main.go:141] libmachine: (no-preload-543029) DBG | skip adding static IP to network mk-no-preload-543029 - found existing host DHCP lease matching {name: "no-preload-543029", mac: "52:54:00:6f:c7:b7", ip: "192.168.72.227"}
	I0723 15:21:19.236586   64842 main.go:141] libmachine: (no-preload-543029) Reserved static IP address: 192.168.72.227
	I0723 15:21:19.236601   64842 main.go:141] libmachine: (no-preload-543029) Waiting for SSH to be available...
	I0723 15:21:19.236613   64842 main.go:141] libmachine: (no-preload-543029) DBG | Getting to WaitForSSH function...
	I0723 15:21:19.239149   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239453   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.239481   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.239620   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH client type: external
	I0723 15:21:19.239651   64842 main.go:141] libmachine: (no-preload-543029) DBG | Using SSH private key: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa (-rw-------)
	I0723 15:21:19.239677   64842 main.go:141] libmachine: (no-preload-543029) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0723 15:21:19.239691   64842 main.go:141] libmachine: (no-preload-543029) DBG | About to run SSH command:
	I0723 15:21:19.239700   64842 main.go:141] libmachine: (no-preload-543029) DBG | exit 0
	I0723 15:21:19.366227   64842 main.go:141] libmachine: (no-preload-543029) DBG | SSH cmd err, output: <nil>: 
	I0723 15:21:19.366646   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetConfigRaw
	I0723 15:21:19.367309   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.370038   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370401   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.370430   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.370756   64842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/config.json ...
	I0723 15:21:19.370949   64842 machine.go:94] provisionDockerMachine start ...
	I0723 15:21:19.370966   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:19.371186   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.373506   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.373912   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.373977   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.374053   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.374259   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374465   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.374635   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.374805   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.374996   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.375009   64842 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:21:19.482523   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0723 15:21:19.482551   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482771   64842 buildroot.go:166] provisioning hostname "no-preload-543029"
	I0723 15:21:19.482796   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.482975   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.485520   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.485868   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.485898   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.486084   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.486300   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486483   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.486634   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.486777   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.486998   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.487019   64842 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-543029 && echo "no-preload-543029" | sudo tee /etc/hostname
	I0723 15:21:19.609064   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-543029
	
	I0723 15:21:19.609100   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.611746   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612087   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.612133   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.612276   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:19.612477   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612663   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:19.612845   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:19.612979   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:19.613158   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:19.613180   64842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-543029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-543029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-543029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:21:19.731696   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:21:19.731721   64842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19319-11303/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-11303/.minikube}
	I0723 15:21:19.731740   64842 buildroot.go:174] setting up certificates
	I0723 15:21:19.731748   64842 provision.go:84] configureAuth start
	I0723 15:21:19.731755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetMachineName
	I0723 15:21:19.732051   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:19.735016   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.735425   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.735608   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:19.737908   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738267   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:19.738317   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:19.738482   64842 provision.go:143] copyHostCerts
	I0723 15:21:19.738556   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem, removing ...
	I0723 15:21:19.738571   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem
	I0723 15:21:19.738641   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/ca.pem (1078 bytes)
	I0723 15:21:19.738746   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem, removing ...
	I0723 15:21:19.738755   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem
	I0723 15:21:19.738779   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/cert.pem (1123 bytes)
	I0723 15:21:19.738852   64842 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem, removing ...
	I0723 15:21:19.738866   64842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem
	I0723 15:21:19.738887   64842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-11303/.minikube/key.pem (1675 bytes)
	I0723 15:21:19.738965   64842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem org=jenkins.no-preload-543029 san=[127.0.0.1 192.168.72.227 localhost minikube no-preload-543029]
	I0723 15:21:20.020845   64842 provision.go:177] copyRemoteCerts
	I0723 15:21:20.020921   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:21:20.020954   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.023907   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024341   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.024363   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.024531   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.024799   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.024973   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.025138   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.113238   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0723 15:21:20.136690   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0723 15:21:20.161178   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:21:20.184741   64842 provision.go:87] duration metric: took 452.982716ms to configureAuth
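
configureAuth above regenerates the machine's server certificate so that it carries the SANs listed in the log (127.0.0.1, 192.168.72.227, localhost, minikube, no-preload-543029), signed by the profile's CA key pair. The crypto/x509 sketch below shows how such a SAN-bearing server certificate can be issued in Go; the self-signed CA is a stand-in for minikube's ca.pem/ca-key.pem, errors are deliberately ignored, and none of this is minikube's provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical self-signed CA standing in for the test profile's CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-543029"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-543029"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.227")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate in PEM form (server.pem in the log's layout).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
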
	I0723 15:21:20.184767   64842 buildroot.go:189] setting minikube options for container-runtime
	I0723 15:21:20.184992   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:20.185076   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.187893   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188209   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.188235   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.188473   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.188684   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.188883   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.189026   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.189181   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.189379   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.189397   64842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:21:17.263163   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:17.762332   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.263184   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.762413   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.263050   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:19.762396   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.263052   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:20.763027   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.263244   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:21.762584   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:18.255042   66641 addons.go:510] duration metric: took 1.391938603s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:19.089229   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:21.587960   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:20.463609   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:21:20.463657   64842 machine.go:97] duration metric: took 1.092694849s to provisionDockerMachine
	I0723 15:21:20.463670   64842 start.go:293] postStartSetup for "no-preload-543029" (driver="kvm2")
	I0723 15:21:20.463684   64842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:21:20.463705   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.464063   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:21:20.464093   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.467027   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467399   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.467429   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.467606   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.467785   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.467938   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.468096   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.556442   64842 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:21:20.561477   64842 info.go:137] Remote host: Buildroot 2023.02.9
	I0723 15:21:20.561506   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/addons for local assets ...
	I0723 15:21:20.561590   64842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-11303/.minikube/files for local assets ...
	I0723 15:21:20.561694   64842 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem -> 185032.pem in /etc/ssl/certs
	I0723 15:21:20.561814   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:21:20.574431   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:20.603531   64842 start.go:296] duration metric: took 139.847057ms for postStartSetup
	I0723 15:21:20.603578   64842 fix.go:56] duration metric: took 18.836315993s for fixHost
	I0723 15:21:20.603644   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.606820   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607184   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.607230   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.607410   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.607660   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607851   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.607999   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.608191   64842 main.go:141] libmachine: Using SSH client type: native
	I0723 15:21:20.608373   64842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.227 22 <nil> <nil>}
	I0723 15:21:20.608383   64842 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0723 15:21:20.718722   64842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721748080.694505305
	
	I0723 15:21:20.718755   64842 fix.go:216] guest clock: 1721748080.694505305
	I0723 15:21:20.718764   64842 fix.go:229] Guest: 2024-07-23 15:21:20.694505305 +0000 UTC Remote: 2024-07-23 15:21:20.603582679 +0000 UTC m=+365.240688683 (delta=90.922626ms)
	I0723 15:21:20.718796   64842 fix.go:200] guest clock delta is within tolerance: 90.922626ms
	I0723 15:21:20.718801   64842 start.go:83] releasing machines lock for "no-preload-543029", held for 18.9515773s
	I0723 15:21:20.718818   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.719088   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:20.721851   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722269   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.722292   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.722527   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723046   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723231   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:20.723328   64842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:21:20.723377   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.723460   64842 ssh_runner.go:195] Run: cat /version.json
	I0723 15:21:20.723485   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:20.726596   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.726987   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727022   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727041   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727142   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727329   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.727475   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:20.727498   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.727510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:20.727638   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:20.727707   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.728003   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:20.728170   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:20.728341   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:20.841462   64842 ssh_runner.go:195] Run: systemctl --version
	I0723 15:21:20.847787   64842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:21:20.998310   64842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0723 15:21:21.004048   64842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0723 15:21:21.004125   64842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:21:21.019676   64842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
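The two entries above show the bridge/podman CNI configs being found and renamed out of the way. A minimal Go sketch of that rename step (illustrative only, not minikube's implementation; the directory and the .mk_disabled suffix are taken from the log):

// Sketch: disable bridge/podman CNI configs by renaming them, roughly what
// the logged "find ... -exec mv {} {}.mk_disabled" command does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		// skip files that were already disabled on a previous run
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
}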
	I0723 15:21:21.019699   64842 start.go:495] detecting cgroup driver to use...
	I0723 15:21:21.019773   64842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:21:21.034888   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:21:21.049886   64842 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:21:21.049949   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:21:21.063974   64842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:21:21.077306   64842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:21:21.195936   64842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:21:21.355002   64842 docker.go:233] disabling docker service ...
	I0723 15:21:21.355090   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:21:21.370421   64842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:21:21.382910   64842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:21:21.493040   64842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:21:21.610670   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:21:21.623845   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:21:21.641461   64842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0723 15:21:21.641518   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.651025   64842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:21:21.651096   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.661449   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.671431   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.681681   64842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:21:21.692696   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.702592   64842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.720041   64842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:21:21.730075   64842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:21:21.739621   64842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0723 15:21:21.739686   64842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0723 15:21:21.752036   64842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:21:21.761412   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:21.902842   64842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:21:22.032458   64842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:21:22.032545   64842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:21:22.037229   64842 start.go:563] Will wait 60s for crictl version
	I0723 15:21:22.037309   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.040918   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:21:22.081102   64842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0723 15:21:22.081203   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.111862   64842 ssh_runner.go:195] Run: crio --version
	I0723 15:21:22.140842   64842 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
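The preceding sed calls adjust the CRI-O drop-in (pause image, cgroup manager) before the daemon is restarted. A rough Go sketch of those two edits plus the restart, under the assumption that it runs as root on the guest; it is not minikube's code:

// Sketch: apply the 02-crio.conf edits logged above, then restart CRI-O.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	run("sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf)
	run("sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	run("systemctl", "restart", "crio")
}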
	I0723 15:21:18.404301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:20.406322   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.406365   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:22.142110   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetIP
	I0723 15:21:22.144996   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145342   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:22.145382   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:22.145651   64842 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0723 15:21:22.149630   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
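The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host gateway. An equivalent, illustrative Go version (the IP is the one from the log; this is a hypothetical helper, not the project's implementation):

// Sketch: drop any stale host.minikube.internal entry and append the current one.
package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove the old entry, like the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.72.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}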
	I0723 15:21:22.161308   64842 kubeadm.go:883] updating cluster {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:21:22.161457   64842 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 15:21:22.161507   64842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:21:22.196099   64842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0723 15:21:22.196122   64842 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0723 15:21:22.196180   64842 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.196197   64842 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.196257   64842 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0723 15:21:22.196270   64842 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.196280   64842 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.196391   64842 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.196430   64842 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.196256   64842 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.197600   64842 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.197611   64842 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.197612   64842 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.197603   64842 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.197632   64842 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.197593   64842 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.197855   64842 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0723 15:21:22.453013   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.456128   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.457426   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.457660   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.468840   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.488855   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0723 15:21:22.498800   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.521182   64842 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0723 15:21:22.521236   64842 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.521282   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.606761   64842 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0723 15:21:22.606814   64842 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.606863   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626104   64842 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0723 15:21:22.626139   64842 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0723 15:21:22.626148   64842 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.626171   64842 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626210   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.626405   64842 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0723 15:21:22.626436   64842 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.626497   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.739834   64842 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0723 15:21:22.739888   64842 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.739923   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0723 15:21:22.739972   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0723 15:21:22.739931   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:22.740025   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0723 15:21:22.740028   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0723 15:21:22.740087   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0723 15:21:22.754758   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0723 15:21:22.903466   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0723 15:21:22.903526   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903582   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.903618   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:22.903475   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903669   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903725   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:22.903738   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:22.903808   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0723 15:21:22.903870   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:22.903977   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.904112   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:22.916856   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0723 15:21:22.916880   64842 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.916927   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0723 15:21:22.917993   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918778   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918818   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0723 15:21:22.918846   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0723 15:21:22.918919   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0723 15:21:23.126109   64842 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916361   64842 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.790200633s)
	I0723 15:21:24.916416   64842 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0723 15:21:24.916450   64842 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:24.916477   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.999519999s)
	I0723 15:21:24.916501   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:21:24.916502   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0723 15:21:24.916528   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.916570   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0723 15:21:24.921489   64842 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:22.262373   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:22.762746   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.263229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:23.763195   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.262446   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.762506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.262490   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:25.762353   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.263073   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:26.762900   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:24.087763   66641 node_ready.go:53] node "default-k8s-diff-port-911217" has status "Ready":"False"
	I0723 15:21:24.588088   66641 node_ready.go:49] node "default-k8s-diff-port-911217" has status "Ready":"True"
	I0723 15:21:24.588115   66641 node_ready.go:38] duration metric: took 7.503814941s for node "default-k8s-diff-port-911217" to be "Ready" ...
	I0723 15:21:24.588126   66641 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:24.593658   66641 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598755   66641 pod_ready.go:92] pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:24.598780   66641 pod_ready.go:81] duration metric: took 5.095349ms for pod "coredns-7db6d8ff4d-9qcfs" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:24.598792   66641 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:26.605401   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:24.906330   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:26.906460   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:27.393601   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.477002958s)
	I0723 15:21:27.393621   64842 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.472105782s)
	I0723 15:21:27.393640   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0723 15:21:27.393664   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393665   64842 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0723 15:21:27.393707   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0723 15:21:27.393763   64842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:29.040178   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.646445558s)
	I0723 15:21:29.040216   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0723 15:21:29.040222   64842 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.64643284s)
	I0723 15:21:29.040248   64842 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0723 15:21:29.040252   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:29.040316   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0723 15:21:27.262530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:27.762666   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.262506   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.762908   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.262943   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:29.763041   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.263200   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:30.762855   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.262991   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:31.763215   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:28.605685   66641 pod_ready.go:102] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:29.107082   66641 pod_ready.go:92] pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.107106   66641 pod_ready.go:81] duration metric: took 4.508306433s for pod "etcd-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.107117   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112506   66641 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.112529   66641 pod_ready.go:81] duration metric: took 5.405596ms for pod "kube-apiserver-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.112564   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117710   66641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.117736   66641 pod_ready.go:81] duration metric: took 5.161856ms for pod "kube-controller-manager-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.117748   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122182   66641 pod_ready.go:92] pod "kube-proxy-d4zwd" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.122207   66641 pod_ready.go:81] duration metric: took 4.450531ms for pod "kube-proxy-d4zwd" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.122218   66641 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126407   66641 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:29.126428   66641 pod_ready.go:81] duration metric: took 4.201792ms for pod "kube-scheduler-default-k8s-diff-port-911217" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:29.126439   66641 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:31.133392   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
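These pod_ready lines poll pods in kube-system until their Ready condition turns True (metrics-server never does here, which is what the MetricsServer failures above reflect). A hedged client-go sketch of such a readiness poll; the kubeconfig path and the polling interval are assumptions:

// Sketch: poll a pod's Ready condition with client-go until it is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(cs, "kube-system", "metrics-server-569cc877fc-mkl8l")
		if err == nil && ok {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}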
	I0723 15:21:28.967873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.404672   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:31.100302   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.059957757s)
	I0723 15:21:31.100343   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0723 15:21:31.100373   64842 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:31.100425   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0723 15:21:34.291526   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.191073801s)
	I0723 15:21:34.291561   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0723 15:21:34.291588   64842 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:34.291639   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0723 15:21:32.262345   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:32.762530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.262472   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.763055   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.262344   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:34.762962   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.262594   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:35.762498   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.263210   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:36.763229   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:33.631906   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.632672   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:33.405404   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.906326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:35.650341   64842 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358679252s)
	I0723 15:21:35.650368   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0723 15:21:35.650412   64842 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:35.650450   64842 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0723 15:21:36.307948   64842 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19319-11303/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0723 15:21:36.307992   64842 cache_images.go:123] Successfully loaded all cached images
	I0723 15:21:36.307999   64842 cache_images.go:92] duration metric: took 14.11186471s to LoadCachedImages
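LoadCachedImages above falls back to per-image tarballs because nothing is preloaded: each image is inspected in the runtime and, if absent, loaded with podman load from /var/lib/minikube/images. A sketch of that check-then-load pattern (not the real cache_images code; the etcd tag and cache path are copied from the log):

// Sketch: load a cached image tarball only if the runtime doesn't have it yet.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		return nil // already present
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	cacheDir := "/var/lib/minikube/images" // directory used in the log
	if err := ensureImage("registry.k8s.io/etcd:3.5.14-0",
		filepath.Join(cacheDir, "etcd_3.5.14-0")); err != nil {
		panic(err)
	}
}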
	I0723 15:21:36.308012   64842 kubeadm.go:934] updating node { 192.168.72.227 8443 v1.31.0-beta.0 crio true true} ...
	I0723 15:21:36.308139   64842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:21:36.308223   64842 ssh_runner.go:195] Run: crio config
	I0723 15:21:36.353489   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:36.353510   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:36.353521   64842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:21:36.353549   64842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-543029 NodeName:no-preload-543029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:21:36.353706   64842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-543029"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:21:36.353774   64842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0723 15:21:36.363814   64842 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:21:36.363887   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:21:36.372484   64842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0723 15:21:36.388450   64842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0723 15:21:36.404404   64842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0723 15:21:36.420801   64842 ssh_runner.go:195] Run: grep 192.168.72.227	control-plane.minikube.internal$ /etc/hosts
	I0723 15:21:36.424596   64842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 15:21:36.436558   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:36.563903   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
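At this point the generated kubelet unit and drop-in have been copied over and kubelet started. A simplified sketch of writing such a drop-in and starting the service (the ExecStart flags are copied from the logged unit; assumes root, and is not the scp-based path minikube actually uses):

// Sketch: write the 10-kubeadm.conf drop-in, reload systemd, start kubelet.
package main

import (
	"os"
	"os/exec"
)

func main() {
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-543029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.227
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			panic(err)
		}
	}
}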
	I0723 15:21:36.580045   64842 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029 for IP: 192.168.72.227
	I0723 15:21:36.580108   64842 certs.go:194] generating shared ca certs ...
	I0723 15:21:36.580133   64842 certs.go:226] acquiring lock for ca certs: {Name:mkc9951d8e001787fba4648f53fcd0a765dde2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:36.580339   64842 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key
	I0723 15:21:36.580409   64842 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key
	I0723 15:21:36.580423   64842 certs.go:256] generating profile certs ...
	I0723 15:21:36.580538   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.key
	I0723 15:21:36.580633   64842 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key.1fcf66d2
	I0723 15:21:36.580678   64842 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key
	I0723 15:21:36.580818   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem (1338 bytes)
	W0723 15:21:36.580856   64842 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503_empty.pem, impossibly tiny 0 bytes
	I0723 15:21:36.580866   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:21:36.580899   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/ca.pem (1078 bytes)
	I0723 15:21:36.580934   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:21:36.580968   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/certs/key.pem (1675 bytes)
	I0723 15:21:36.581017   64842 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem (1708 bytes)
	I0723 15:21:36.581890   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:21:36.617903   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0723 15:21:36.650101   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:21:36.690040   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0723 15:21:36.716216   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0723 15:21:36.740583   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 15:21:36.764801   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:21:36.798418   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0723 15:21:36.821594   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/ssl/certs/185032.pem --> /usr/share/ca-certificates/185032.pem (1708 bytes)
	I0723 15:21:36.843862   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:21:36.866577   64842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-11303/.minikube/certs/18503.pem --> /usr/share/ca-certificates/18503.pem (1338 bytes)
	I0723 15:21:36.888178   64842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:21:36.903980   64842 ssh_runner.go:195] Run: openssl version
	I0723 15:21:36.910344   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/185032.pem && ln -fs /usr/share/ca-certificates/185032.pem /etc/ssl/certs/185032.pem"
	I0723 15:21:36.920792   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925317   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:09 /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.925372   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/185032.pem
	I0723 15:21:36.931375   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/185032.pem /etc/ssl/certs/3ec20f2e.0"
	I0723 15:21:36.941782   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:21:36.952943   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957594   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.957643   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:21:36.963465   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:21:36.974471   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18503.pem && ln -fs /usr/share/ca-certificates/18503.pem /etc/ssl/certs/18503.pem"
	I0723 15:21:36.984631   64842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989126   64842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:09 /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.989180   64842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18503.pem
	I0723 15:21:36.994580   64842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18503.pem /etc/ssl/certs/51391683.0"
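Each CA above is hashed with openssl and exposed under /etc/ssl/certs/&lt;hash&gt;.0, which is how OpenSSL locates trusted certificates. An illustrative Go version of that "openssl x509 -hash" + "ln -fs" pair (certificate path and example hash taken from the log):

// Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace an existing link, like "ln -fs"
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}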
	I0723 15:21:37.004372   64842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:21:37.009492   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:21:37.016189   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:21:37.023648   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:21:37.030369   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:21:37.036358   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:21:37.042504   64842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
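The -checkend 86400 calls verify that none of the control-plane certificates expire within the next day. A pure-Go equivalent sketch using crypto/x509 (the certificate path is one of those checked above):

// Sketch: fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}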
	I0723 15:21:37.048396   64842 kubeadm.go:392] StartCluster: {Name:no-preload-543029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-543029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:21:37.048473   64842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:21:37.048542   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.085642   64842 cri.go:89] found id: ""
	I0723 15:21:37.085711   64842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 15:21:37.095789   64842 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0723 15:21:37.095809   64842 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0723 15:21:37.095861   64842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0723 15:21:37.105817   64842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0723 15:21:37.106841   64842 kubeconfig.go:125] found "no-preload-543029" server: "https://192.168.72.227:8443"
	I0723 15:21:37.109115   64842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0723 15:21:37.118333   64842 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.227
	I0723 15:21:37.118365   64842 kubeadm.go:1160] stopping kube-system containers ...
	I0723 15:21:37.118389   64842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0723 15:21:37.118442   64842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:21:37.160393   64842 cri.go:89] found id: ""
	I0723 15:21:37.160465   64842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0723 15:21:37.175866   64842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:21:37.184719   64842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:21:37.184737   64842 kubeadm.go:157] found existing configuration files:
	
	I0723 15:21:37.184796   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:21:37.192836   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:21:37.192893   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:21:37.201472   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:21:37.209448   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:21:37.209509   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:21:37.217692   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.225746   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:21:37.225792   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:21:37.234312   64842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:21:37.242796   64842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:21:37.242853   64842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:21:37.251655   64842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:21:37.260393   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:37.372906   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.228191   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.438949   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:38.503088   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
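Because the kubeconfig files were missing, the restart path regenerates everything by running the individual kubeadm init phases shown above. A sketch of driving those phases in order (binary and config paths from the log; error handling deliberately minimal):

// Sketch: run the kubeadm "init phase" subcommands against the generated config.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			os.Exit(1)
		}
	}
}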
	I0723 15:21:38.588692   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:21:38.588787   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.089205   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.589266   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.609653   64842 api_server.go:72] duration metric: took 1.020961559s to wait for apiserver process to appear ...
	I0723 15:21:39.609681   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:21:39.609703   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:39.610233   64842 api_server.go:269] stopped: https://192.168.72.227:8443/healthz: Get "https://192.168.72.227:8443/healthz": dial tcp 192.168.72.227:8443: connect: connection refused
	I0723 15:21:40.110036   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:37.263268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:37.763001   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.263263   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.762567   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.262510   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:39.762366   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.263091   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:40.762546   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.263115   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:41.762511   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:38.133459   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.634011   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:38.405042   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:40.405301   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.406499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:42.755036   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.755081   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:42.755102   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:42.774722   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0723 15:21:42.774753   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0723 15:21:43.110105   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.114521   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.114549   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:43.610681   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:43.619976   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0723 15:21:43.620012   64842 api_server.go:103] status: https://192.168.72.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0723 15:21:44.110574   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:21:44.117164   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:21:44.125459   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:21:44.125487   64842 api_server.go:131] duration metric: took 4.515798224s to wait for apiserver health ...
	I0723 15:21:44.125500   64842 cni.go:84] Creating CNI manager for ""
	I0723 15:21:44.125508   64842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:21:44.127031   64842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:21:44.128250   64842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:21:44.156441   64842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0723 15:21:44.190002   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:21:44.202487   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:21:44.202543   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0723 15:21:44.202558   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0723 15:21:44.202570   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0723 15:21:44.202580   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0723 15:21:44.202597   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0723 15:21:44.202611   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0723 15:21:44.202623   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:21:44.202635   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0723 15:21:44.202649   64842 system_pods.go:74] duration metric: took 12.618106ms to wait for pod list to return data ...
	I0723 15:21:44.202663   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:21:44.208561   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:21:44.208598   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:21:44.208613   64842 node_conditions.go:105] duration metric: took 5.939597ms to run NodePressure ...
	I0723 15:21:44.208637   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0723 15:21:44.527115   64842 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531381   64842 kubeadm.go:739] kubelet initialised
	I0723 15:21:44.531403   64842 kubeadm.go:740] duration metric: took 4.261609ms waiting for restarted kubelet to initialise ...
	I0723 15:21:44.531410   64842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:44.536741   64842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.542345   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542367   64842 pod_ready.go:81] duration metric: took 5.603228ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.542376   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.542409   64842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.547170   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547202   64842 pod_ready.go:81] duration metric: took 4.783034ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.547214   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "etcd-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.547223   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.552220   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552239   64842 pod_ready.go:81] duration metric: took 5.010275ms for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.552247   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-apiserver-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.552252   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.593233   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593263   64842 pod_ready.go:81] duration metric: took 41.002989ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.593275   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.593284   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:44.993527   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993556   64842 pod_ready.go:81] duration metric: took 400.24962ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:44.993567   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-proxy-wzbps" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:44.993575   64842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.393187   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393215   64842 pod_ready.go:81] duration metric: took 399.632229ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.393224   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "kube-scheduler-no-preload-543029" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.393230   64842 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:45.794005   64842 pod_ready.go:97] node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794039   64842 pod_ready.go:81] duration metric: took 400.798877ms for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:21:45.794050   64842 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-543029" hosting pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:45.794061   64842 pod_ready.go:38] duration metric: took 1.262643249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:45.794082   64842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:21:45.806575   64842 ops.go:34] apiserver oom_adj: -16
	I0723 15:21:45.806604   64842 kubeadm.go:597] duration metric: took 8.710787698s to restartPrimaryControlPlane
	I0723 15:21:45.806616   64842 kubeadm.go:394] duration metric: took 8.758224212s to StartCluster
	I0723 15:21:45.806636   64842 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.806714   64842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:21:45.808707   64842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:21:45.808950   64842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:21:45.809024   64842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:21:45.809108   64842 addons.go:69] Setting storage-provisioner=true in profile "no-preload-543029"
	I0723 15:21:45.809121   64842 config.go:182] Loaded profile config "no-preload-543029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:21:45.809144   64842 addons.go:234] Setting addon storage-provisioner=true in "no-preload-543029"
	I0723 15:21:45.809148   64842 addons.go:69] Setting default-storageclass=true in profile "no-preload-543029"
	I0723 15:21:45.809158   64842 addons.go:69] Setting metrics-server=true in profile "no-preload-543029"
	I0723 15:21:45.809186   64842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-543029"
	I0723 15:21:45.809198   64842 addons.go:234] Setting addon metrics-server=true in "no-preload-543029"
	W0723 15:21:45.809207   64842 addons.go:243] addon metrics-server should already be in state true
	I0723 15:21:45.809233   64842 host.go:66] Checking if "no-preload-543029" exists ...
	W0723 15:21:45.809156   64842 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:21:45.809298   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.809533   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809566   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809615   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809650   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.809666   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.809694   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.810889   64842 out.go:177] * Verifying Kubernetes components...
	I0723 15:21:45.812166   64842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:21:45.825877   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0723 15:21:45.826459   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.826873   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0723 15:21:45.827091   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827122   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.827302   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.827520   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.827785   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.827809   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.828045   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.828076   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.828197   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.828404   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.828464   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0723 15:21:45.829160   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.829594   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.829617   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.830024   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.830679   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.830726   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.832633   64842 addons.go:234] Setting addon default-storageclass=true in "no-preload-543029"
	W0723 15:21:45.832654   64842 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:21:45.832683   64842 host.go:66] Checking if "no-preload-543029" exists ...
	I0723 15:21:45.833024   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.833067   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.848944   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0723 15:21:45.849974   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.850455   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0723 15:21:45.850916   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.850938   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.851135   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.851254   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.851443   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.852354   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.852373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.852472   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0723 15:21:45.852797   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.853534   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.853613   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.853820   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.854337   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.854373   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.854866   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.855572   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.855606   64842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:21:45.855642   64842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:21:45.855829   64842 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:21:45.857645   64842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:21:45.857658   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:21:45.857676   64842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:21:45.857695   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:42.262868   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:42.762469   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.262898   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.762342   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.262359   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:44.763149   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.263062   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:45.763109   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.262592   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:46.763170   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:43.132245   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.633648   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:45.859112   64842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:45.859130   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:21:45.859146   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.861510   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862069   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.862099   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.862362   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.862596   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.862842   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.863077   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.863162   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.864192   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.864223   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.864257   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.864446   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.864602   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.864750   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:45.901172   64842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0723 15:21:45.901604   64842 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:21:45.902073   64842 main.go:141] libmachine: Using API Version  1
	I0723 15:21:45.902096   64842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:21:45.902455   64842 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:21:45.902711   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetState
	I0723 15:21:45.904749   64842 main.go:141] libmachine: (no-preload-543029) Calling .DriverName
	I0723 15:21:45.905713   64842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:45.905736   64842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:21:45.905755   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHHostname
	I0723 15:21:45.909130   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909598   64842 main.go:141] libmachine: (no-preload-543029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:c7:b7", ip: ""} in network mk-no-preload-543029: {Iface:virbr2 ExpiryTime:2024-07-23 16:21:12 +0000 UTC Type:0 Mac:52:54:00:6f:c7:b7 Iaid: IPaddr:192.168.72.227 Prefix:24 Hostname:no-preload-543029 Clientid:01:52:54:00:6f:c7:b7}
	I0723 15:21:45.909655   64842 main.go:141] libmachine: (no-preload-543029) DBG | domain no-preload-543029 has defined IP address 192.168.72.227 and MAC address 52:54:00:6f:c7:b7 in network mk-no-preload-543029
	I0723 15:21:45.909882   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHPort
	I0723 15:21:45.910025   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHKeyPath
	I0723 15:21:45.910171   64842 main.go:141] libmachine: (no-preload-543029) Calling .GetSSHUsername
	I0723 15:21:45.910413   64842 sshutil.go:53] new ssh client: &{IP:192.168.72.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/no-preload-543029/id_rsa Username:docker}
	I0723 15:21:46.014049   64842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:21:46.040760   64842 node_ready.go:35] waiting up to 6m0s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:46.115180   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:21:46.144610   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:21:46.144632   64842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:21:46.164354   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:21:46.181905   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:21:46.181929   64842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:21:46.241734   64842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:46.241764   64842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:21:46.267086   64842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:21:47.396441   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281225615s)
	I0723 15:21:47.396460   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232072139s)
	I0723 15:21:47.396498   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396512   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396529   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396544   64842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129426841s)
	I0723 15:21:47.396591   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396611   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396879   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396894   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396904   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396912   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.396927   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.396948   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.396958   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.396973   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.397067   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397093   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.397113   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397120   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397310   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.397326   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.397335   64842 addons.go:475] Verifying addon metrics-server=true in "no-preload-543029"
	I0723 15:21:47.398473   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398488   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.398497   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.398504   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.398766   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.398788   64842 main.go:141] libmachine: (no-preload-543029) DBG | Closing plugin on server side
	I0723 15:21:47.398805   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.420728   64842 main.go:141] libmachine: Making call to close driver server
	I0723 15:21:47.420747   64842 main.go:141] libmachine: (no-preload-543029) Calling .Close
	I0723 15:21:47.421047   64842 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:21:47.421067   64842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:21:47.423038   64842 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0723 15:21:44.409201   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:46.905099   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:47.424285   64842 addons.go:510] duration metric: took 1.615264126s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0723 15:21:48.044800   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:47.262743   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:47.762500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.262636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:48.762397   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.262912   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:49.763274   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.262631   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:50.762560   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.262984   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:51.763131   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:51.763218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:51.804139   65605 cri.go:89] found id: ""
	I0723 15:21:51.804167   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.804177   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:51.804185   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:51.804246   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:51.846025   65605 cri.go:89] found id: ""
	I0723 15:21:51.846052   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.846064   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:51.846070   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:51.846133   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:48.132371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.133097   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:49.405318   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.907543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:50.545198   64842 node_ready.go:53] node "no-preload-543029" has status "Ready":"False"
	I0723 15:21:53.045065   64842 node_ready.go:49] node "no-preload-543029" has status "Ready":"True"
	I0723 15:21:53.045092   64842 node_ready.go:38] duration metric: took 7.004300565s for node "no-preload-543029" to be "Ready" ...
	I0723 15:21:53.045103   64842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:21:53.051631   64842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056333   64842 pod_ready.go:92] pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.056391   64842 pod_ready.go:81] duration metric: took 4.723453ms for pod "coredns-5cfdc65f69-v2bhl" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.056428   64842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061634   64842 pod_ready.go:92] pod "etcd-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:53.061654   64842 pod_ready.go:81] duration metric: took 5.217288ms for pod "etcd-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:53.061666   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:55.068882   64842 pod_ready.go:102] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:51.885398   65605 cri.go:89] found id: ""
	I0723 15:21:51.885431   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.885442   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:51.885450   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:51.885514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:51.919587   65605 cri.go:89] found id: ""
	I0723 15:21:51.919618   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.919630   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:51.919637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:51.919723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:51.955301   65605 cri.go:89] found id: ""
	I0723 15:21:51.955335   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.955342   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:51.955348   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:51.955397   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:51.988318   65605 cri.go:89] found id: ""
	I0723 15:21:51.988345   65605 logs.go:276] 0 containers: []
	W0723 15:21:51.988355   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:51.988362   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:51.988419   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:52.023375   65605 cri.go:89] found id: ""
	I0723 15:21:52.023407   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.023418   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:52.023426   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:52.023498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:52.060183   65605 cri.go:89] found id: ""
	I0723 15:21:52.060205   65605 logs.go:276] 0 containers: []
	W0723 15:21:52.060212   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:52.060221   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:52.060233   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:52.109904   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:52.109937   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:52.123292   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:52.123317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:52.253361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:52.253386   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:52.253401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:52.321684   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:52.321720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:54.859846   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:54.873167   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:54.873233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:54.909330   65605 cri.go:89] found id: ""
	I0723 15:21:54.909351   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.909359   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:54.909364   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:54.909412   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:54.943092   65605 cri.go:89] found id: ""
	I0723 15:21:54.943120   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.943131   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:54.943138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:54.943198   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:54.975051   65605 cri.go:89] found id: ""
	I0723 15:21:54.975080   65605 logs.go:276] 0 containers: []
	W0723 15:21:54.975090   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:54.975098   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:54.975172   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:55.017552   65605 cri.go:89] found id: ""
	I0723 15:21:55.017580   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.017590   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:55.017596   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:55.017657   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:55.067857   65605 cri.go:89] found id: ""
	I0723 15:21:55.067887   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.067897   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:55.067903   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:55.067965   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:55.105194   65605 cri.go:89] found id: ""
	I0723 15:21:55.105224   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.105234   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:55.105242   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:55.105312   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:55.174421   65605 cri.go:89] found id: ""
	I0723 15:21:55.174451   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.174463   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:55.174470   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:55.174521   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:55.209007   65605 cri.go:89] found id: ""
	I0723 15:21:55.209032   65605 logs.go:276] 0 containers: []
	W0723 15:21:55.209039   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:55.209048   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:55.209059   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:55.261075   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:55.261110   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:55.273629   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:55.273656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:55.348214   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:55.348237   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:55.348271   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:55.418341   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:55.418371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:21:52.134201   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.633089   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:54.405215   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.405377   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:56.068263   64842 pod_ready.go:92] pod "kube-apiserver-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.068285   64842 pod_ready.go:81] duration metric: took 3.006610636s for pod "kube-apiserver-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.068294   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073245   64842 pod_ready.go:92] pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.073267   64842 pod_ready.go:81] duration metric: took 4.962522ms for pod "kube-controller-manager-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.073275   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078816   64842 pod_ready.go:92] pod "kube-proxy-wzbps" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.078835   64842 pod_ready.go:81] duration metric: took 5.554703ms for pod "kube-proxy-wzbps" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.078843   64842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646678   64842 pod_ready.go:92] pod "kube-scheduler-no-preload-543029" in "kube-system" namespace has status "Ready":"True"
	I0723 15:21:56.646709   64842 pod_ready.go:81] duration metric: took 567.858812ms for pod "kube-scheduler-no-preload-543029" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:56.646722   64842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	I0723 15:21:58.653962   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:57.956565   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:21:57.969980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:21:57.970054   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:21:58.002894   65605 cri.go:89] found id: ""
	I0723 15:21:58.002925   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.002943   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:21:58.002951   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:21:58.003018   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:21:58.034980   65605 cri.go:89] found id: ""
	I0723 15:21:58.035007   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.035017   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:21:58.035024   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:21:58.035090   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:21:58.068666   65605 cri.go:89] found id: ""
	I0723 15:21:58.068694   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.068702   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:21:58.068708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:21:58.068757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:21:58.102693   65605 cri.go:89] found id: ""
	I0723 15:21:58.102727   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.102737   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:21:58.102744   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:21:58.102807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:21:58.137492   65605 cri.go:89] found id: ""
	I0723 15:21:58.137521   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.137530   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:21:58.137535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:21:58.137590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:21:58.173616   65605 cri.go:89] found id: ""
	I0723 15:21:58.173640   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.173647   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:21:58.173654   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:21:58.173716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:21:58.206995   65605 cri.go:89] found id: ""
	I0723 15:21:58.207023   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.207033   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:21:58.207040   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:21:58.207100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:21:58.238476   65605 cri.go:89] found id: ""
	I0723 15:21:58.238504   65605 logs.go:276] 0 containers: []
	W0723 15:21:58.238513   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:21:58.238525   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:21:58.238538   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:21:58.291074   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:21:58.291104   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:21:58.305305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:21:58.305349   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:21:58.379551   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:58.379572   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:21:58.379587   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:21:58.453253   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:21:58.453293   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:00.994715   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:01.010264   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:01.010359   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:01.065402   65605 cri.go:89] found id: ""
	I0723 15:22:01.065433   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.065443   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:01.065451   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:01.065511   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:01.115626   65605 cri.go:89] found id: ""
	I0723 15:22:01.115655   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.115666   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:01.115675   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:01.115737   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:01.155568   65605 cri.go:89] found id: ""
	I0723 15:22:01.155595   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.155604   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:01.155610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:01.155674   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:01.191076   65605 cri.go:89] found id: ""
	I0723 15:22:01.191102   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.191110   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:01.191116   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:01.191162   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:01.224233   65605 cri.go:89] found id: ""
	I0723 15:22:01.224257   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.224263   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:01.224269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:01.224337   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:01.257321   65605 cri.go:89] found id: ""
	I0723 15:22:01.257344   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.257351   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:01.257357   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:01.257415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:01.289646   65605 cri.go:89] found id: ""
	I0723 15:22:01.289670   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.289678   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:01.289685   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:01.289740   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:01.322672   65605 cri.go:89] found id: ""
	I0723 15:22:01.322703   65605 logs.go:276] 0 containers: []
	W0723 15:22:01.322714   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:01.322725   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:01.322741   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:01.395637   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:01.395674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:01.434548   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:01.434580   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:01.484364   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:01.484396   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:01.497536   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:01.497571   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:01.567570   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:21:57.132119   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:59.132178   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.134156   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:21:58.407847   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:00.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:01.161116   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.658640   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:04.068561   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:04.082660   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:04.082738   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:04.118536   65605 cri.go:89] found id: ""
	I0723 15:22:04.118566   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.118576   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:04.118584   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:04.118642   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:04.154768   65605 cri.go:89] found id: ""
	I0723 15:22:04.154792   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.154802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:04.154809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:04.154854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:04.188426   65605 cri.go:89] found id: ""
	I0723 15:22:04.188456   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.188464   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:04.188469   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:04.188517   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:04.222195   65605 cri.go:89] found id: ""
	I0723 15:22:04.222221   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.222229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:04.222251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:04.222327   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:04.259164   65605 cri.go:89] found id: ""
	I0723 15:22:04.259191   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.259201   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:04.259208   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:04.259275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:04.291500   65605 cri.go:89] found id: ""
	I0723 15:22:04.291527   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.291534   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:04.291541   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:04.291595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:04.326680   65605 cri.go:89] found id: ""
	I0723 15:22:04.326712   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.326722   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:04.326729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:04.326789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:04.358629   65605 cri.go:89] found id: ""
	I0723 15:22:04.358653   65605 logs.go:276] 0 containers: []
	W0723 15:22:04.358662   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:04.358671   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:04.358682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:04.429591   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:04.429614   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:04.429625   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:04.509841   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:04.509887   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:04.547827   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:04.547852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:04.600857   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:04.600891   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:03.633501   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.633691   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:03.404413   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:05.404840   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.405499   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:06.153755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:08.653890   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:07.116541   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:07.129739   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:07.129809   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:07.164541   65605 cri.go:89] found id: ""
	I0723 15:22:07.164573   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.164583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:07.164589   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:07.164651   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:07.202567   65605 cri.go:89] found id: ""
	I0723 15:22:07.202595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.202606   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:07.202613   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:07.202672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:07.238665   65605 cri.go:89] found id: ""
	I0723 15:22:07.238689   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.238698   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:07.238706   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:07.238763   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:07.271216   65605 cri.go:89] found id: ""
	I0723 15:22:07.271246   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.271256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:07.271263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:07.271335   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:07.303566   65605 cri.go:89] found id: ""
	I0723 15:22:07.303595   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.303606   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:07.303613   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:07.303672   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:07.337927   65605 cri.go:89] found id: ""
	I0723 15:22:07.337951   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.337959   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:07.337965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:07.338023   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:07.373813   65605 cri.go:89] found id: ""
	I0723 15:22:07.373841   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.373852   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:07.373860   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:07.373928   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:07.408301   65605 cri.go:89] found id: ""
	I0723 15:22:07.408326   65605 logs.go:276] 0 containers: []
	W0723 15:22:07.408333   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:07.408340   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:07.408350   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:07.488384   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:07.488417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.531867   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:07.531895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:07.582639   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:07.582671   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:07.597387   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:07.597413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:07.673185   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.173915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:10.186657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:10.186717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:10.218213   65605 cri.go:89] found id: ""
	I0723 15:22:10.218238   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.218246   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:10.218252   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:10.218302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:10.250199   65605 cri.go:89] found id: ""
	I0723 15:22:10.250228   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.250238   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:10.250245   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:10.250307   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:10.282920   65605 cri.go:89] found id: ""
	I0723 15:22:10.282947   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.282957   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:10.282965   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:10.283022   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:10.317334   65605 cri.go:89] found id: ""
	I0723 15:22:10.317363   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.317372   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:10.317380   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:10.317443   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:10.350520   65605 cri.go:89] found id: ""
	I0723 15:22:10.350548   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.350559   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:10.350566   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:10.350630   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:10.381360   65605 cri.go:89] found id: ""
	I0723 15:22:10.381385   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.381392   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:10.381405   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:10.381451   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:10.413202   65605 cri.go:89] found id: ""
	I0723 15:22:10.413231   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.413239   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:10.413244   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:10.413300   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:10.447102   65605 cri.go:89] found id: ""
	I0723 15:22:10.447132   65605 logs.go:276] 0 containers: []
	W0723 15:22:10.447143   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:10.447154   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:10.447168   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:10.496110   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:10.496141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:10.509298   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:10.509331   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:10.578938   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:10.578960   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:10.578975   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:10.660316   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:10.660346   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:07.634852   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.635205   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:09.905326   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.906212   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:11.153941   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.652564   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:13.199119   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:13.212070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:13.212129   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:13.247646   65605 cri.go:89] found id: ""
	I0723 15:22:13.247683   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.247694   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:13.247701   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:13.247759   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:13.277875   65605 cri.go:89] found id: ""
	I0723 15:22:13.277901   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.277909   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:13.277918   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:13.277973   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:13.311499   65605 cri.go:89] found id: ""
	I0723 15:22:13.311520   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.311527   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:13.311533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:13.311587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:13.342913   65605 cri.go:89] found id: ""
	I0723 15:22:13.342944   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.342955   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:13.342963   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:13.343020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:13.380062   65605 cri.go:89] found id: ""
	I0723 15:22:13.380085   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.380092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:13.380097   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:13.380148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:13.416683   65605 cri.go:89] found id: ""
	I0723 15:22:13.416712   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.416721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:13.416728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:13.416786   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:13.451783   65605 cri.go:89] found id: ""
	I0723 15:22:13.451806   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.451813   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:13.451819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:13.451864   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:13.490456   65605 cri.go:89] found id: ""
	I0723 15:22:13.490488   65605 logs.go:276] 0 containers: []
	W0723 15:22:13.490500   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:13.490512   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:13.490531   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:13.562391   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:13.562419   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:13.562435   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:13.639271   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:13.639330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:13.677457   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:13.677486   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:13.727877   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:13.727912   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.242569   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:16.255165   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:16.255237   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:16.286884   65605 cri.go:89] found id: ""
	I0723 15:22:16.286973   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.286990   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:16.286998   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:16.287070   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:16.319480   65605 cri.go:89] found id: ""
	I0723 15:22:16.319508   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.319518   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:16.319524   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:16.319590   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:16.356142   65605 cri.go:89] found id: ""
	I0723 15:22:16.356176   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.356186   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:16.356193   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:16.356251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:16.393720   65605 cri.go:89] found id: ""
	I0723 15:22:16.393748   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.393756   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:16.393761   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:16.393817   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:16.429752   65605 cri.go:89] found id: ""
	I0723 15:22:16.429788   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.429800   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:16.429807   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:16.429865   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:16.463983   65605 cri.go:89] found id: ""
	I0723 15:22:16.464012   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.464023   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:16.464030   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:16.464099   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:16.497390   65605 cri.go:89] found id: ""
	I0723 15:22:16.497417   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.497428   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:16.497435   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:16.497496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:16.532460   65605 cri.go:89] found id: ""
	I0723 15:22:16.532491   65605 logs.go:276] 0 containers: []
	W0723 15:22:16.532502   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:16.532513   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:16.532525   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:16.584455   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:16.584492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:16.599205   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:16.599237   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:16.672183   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:16.672207   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:16.672221   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:16.748888   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:16.748923   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:12.132681   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.134314   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.634068   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:14.404961   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:16.406911   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:15.652813   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:17.653585   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.654123   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:19.286407   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:19.300815   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:19.300890   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:19.341088   65605 cri.go:89] found id: ""
	I0723 15:22:19.341122   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.341133   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:19.341140   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:19.341191   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:19.375597   65605 cri.go:89] found id: ""
	I0723 15:22:19.375627   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.375635   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:19.375641   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:19.375689   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:19.412206   65605 cri.go:89] found id: ""
	I0723 15:22:19.412234   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.412244   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:19.412252   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:19.412315   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:19.445598   65605 cri.go:89] found id: ""
	I0723 15:22:19.445631   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.445645   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:19.445653   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:19.445725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:19.477766   65605 cri.go:89] found id: ""
	I0723 15:22:19.477800   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.477811   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:19.477818   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:19.477877   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:19.509935   65605 cri.go:89] found id: ""
	I0723 15:22:19.509965   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.509976   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:19.509982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:19.510039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:19.542906   65605 cri.go:89] found id: ""
	I0723 15:22:19.542936   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.542947   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:19.542954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:19.543010   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:19.575935   65605 cri.go:89] found id: ""
	I0723 15:22:19.575964   65605 logs.go:276] 0 containers: []
	W0723 15:22:19.575975   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:19.576036   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:19.576054   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:19.625640   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:19.625674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:19.638938   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:19.638965   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:19.711019   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:19.711047   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:19.711061   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:19.787744   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:19.787781   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:19.133215   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.632570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:18.905104   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:21.404733   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.152487   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:24.154220   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:22.326500   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:22.339677   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:22.339741   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:22.374593   65605 cri.go:89] found id: ""
	I0723 15:22:22.374630   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.374641   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:22.374649   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:22.374713   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:22.408064   65605 cri.go:89] found id: ""
	I0723 15:22:22.408089   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.408099   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:22.408106   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:22.408166   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:22.442923   65605 cri.go:89] found id: ""
	I0723 15:22:22.442956   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.442968   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:22.442976   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:22.443038   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:22.476003   65605 cri.go:89] found id: ""
	I0723 15:22:22.476027   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.476036   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:22.476043   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:22.476109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:22.508221   65605 cri.go:89] found id: ""
	I0723 15:22:22.508253   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.508260   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:22.508268   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:22.508328   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:22.540748   65605 cri.go:89] found id: ""
	I0723 15:22:22.540778   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.540789   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:22.540797   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:22.540857   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:22.576000   65605 cri.go:89] found id: ""
	I0723 15:22:22.576028   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.576038   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:22.576044   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:22.576102   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:22.614295   65605 cri.go:89] found id: ""
	I0723 15:22:22.614325   65605 logs.go:276] 0 containers: []
	W0723 15:22:22.614335   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:22.614346   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:22.614361   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:22.627447   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:22.627481   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:22.701142   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:22.701172   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:22.701188   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:22.788487   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:22.788523   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:22.831107   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:22.831136   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.382886   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:25.396072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:25.396147   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:25.432414   65605 cri.go:89] found id: ""
	I0723 15:22:25.432443   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.432454   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:25.432482   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:25.432554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:25.466375   65605 cri.go:89] found id: ""
	I0723 15:22:25.466421   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.466429   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:25.466434   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:25.466488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:25.502512   65605 cri.go:89] found id: ""
	I0723 15:22:25.502536   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.502545   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:25.502553   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:25.502624   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:25.535953   65605 cri.go:89] found id: ""
	I0723 15:22:25.535975   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.535984   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:25.535991   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:25.536051   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:25.569217   65605 cri.go:89] found id: ""
	I0723 15:22:25.569250   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.569261   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:25.569269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:25.569331   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:25.602317   65605 cri.go:89] found id: ""
	I0723 15:22:25.602341   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.602350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:25.602360   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:25.602433   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:25.636959   65605 cri.go:89] found id: ""
	I0723 15:22:25.636984   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.636994   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:25.637001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:25.637059   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:25.671719   65605 cri.go:89] found id: ""
	I0723 15:22:25.671753   65605 logs.go:276] 0 containers: []
	W0723 15:22:25.671764   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:25.671775   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:25.671789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:25.720509   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:25.720540   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:25.733097   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:25.733121   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:25.809365   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:25.809393   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:25.809409   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:25.890663   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:25.890700   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:23.634537   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.133073   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:23.905075   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:25.905102   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:27.905390   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:26.653893   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.660981   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:28.430884   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:28.444825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:28.444882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:28.477510   65605 cri.go:89] found id: ""
	I0723 15:22:28.477533   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.477540   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:28.477546   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:28.477611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:28.515395   65605 cri.go:89] found id: ""
	I0723 15:22:28.515424   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.515434   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:28.515440   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:28.515498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:28.554144   65605 cri.go:89] found id: ""
	I0723 15:22:28.554169   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.554176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:28.554185   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:28.554239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:28.588756   65605 cri.go:89] found id: ""
	I0723 15:22:28.588783   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.588794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:28.588801   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:28.588861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:28.623278   65605 cri.go:89] found id: ""
	I0723 15:22:28.623305   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.623313   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:28.623318   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:28.623372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:28.666802   65605 cri.go:89] found id: ""
	I0723 15:22:28.666831   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.666840   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:28.666847   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:28.666906   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:28.697712   65605 cri.go:89] found id: ""
	I0723 15:22:28.697736   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.697744   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:28.697749   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:28.697803   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:28.730296   65605 cri.go:89] found id: ""
	I0723 15:22:28.730333   65605 logs.go:276] 0 containers: []
	W0723 15:22:28.730340   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:28.730349   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:28.730360   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.779381   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:28.779417   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:28.792687   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:28.792718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:28.859483   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:28.859508   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:28.859537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:28.933792   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:28.933824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.474653   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:31.488537   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:31.488602   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:31.522785   65605 cri.go:89] found id: ""
	I0723 15:22:31.522816   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.522826   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:31.522834   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:31.522901   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:31.554448   65605 cri.go:89] found id: ""
	I0723 15:22:31.554493   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.554503   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:31.554508   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:31.554568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:31.587456   65605 cri.go:89] found id: ""
	I0723 15:22:31.587479   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.587486   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:31.587492   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:31.587549   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:31.625604   65605 cri.go:89] found id: ""
	I0723 15:22:31.625632   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.625640   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:31.625646   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:31.625696   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:31.661266   65605 cri.go:89] found id: ""
	I0723 15:22:31.661298   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.661304   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:31.661309   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:31.661364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:31.696942   65605 cri.go:89] found id: ""
	I0723 15:22:31.696974   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.696984   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:31.696992   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:31.697055   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:31.730706   65605 cri.go:89] found id: ""
	I0723 15:22:31.730730   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.730738   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:31.730743   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:31.730789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:31.762778   65605 cri.go:89] found id: ""
	I0723 15:22:31.762802   65605 logs.go:276] 0 containers: []
	W0723 15:22:31.762810   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:31.762818   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:31.762829   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:31.804789   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:31.804814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:28.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:30.133732   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:29.906482   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:32.404579   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.152594   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:33.154059   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:31.854481   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:31.854514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:31.867003   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:31.867028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:31.942544   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:31.942565   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:31.942576   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.519437   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:34.531879   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:34.531941   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:34.565547   65605 cri.go:89] found id: ""
	I0723 15:22:34.565572   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.565580   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:34.565585   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:34.565634   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:34.597865   65605 cri.go:89] found id: ""
	I0723 15:22:34.597892   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.597902   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:34.597908   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:34.597968   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:34.633153   65605 cri.go:89] found id: ""
	I0723 15:22:34.633176   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.633185   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:34.633192   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:34.633251   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:34.668464   65605 cri.go:89] found id: ""
	I0723 15:22:34.668486   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.668496   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:34.668502   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:34.668573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:34.700358   65605 cri.go:89] found id: ""
	I0723 15:22:34.700401   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.700412   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:34.700422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:34.700495   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:34.731774   65605 cri.go:89] found id: ""
	I0723 15:22:34.731807   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.731819   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:34.731828   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:34.731902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:34.764204   65605 cri.go:89] found id: ""
	I0723 15:22:34.764232   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.764243   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:34.764251   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:34.764311   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:34.794103   65605 cri.go:89] found id: ""
	I0723 15:22:34.794131   65605 logs.go:276] 0 containers: []
	W0723 15:22:34.794139   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:34.794149   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:34.794165   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:34.868038   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:34.868063   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:34.868076   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:34.958254   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:34.958291   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:35.004649   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:35.004681   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:35.055496   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:35.055537   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:32.632017   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.634515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:34.405341   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:36.905094   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:35.652935   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.654130   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:40.153533   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:37.569938   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:37.582561   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:37.582629   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:37.613053   65605 cri.go:89] found id: ""
	I0723 15:22:37.613081   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.613090   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:37.613096   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:37.613161   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:37.649239   65605 cri.go:89] found id: ""
	I0723 15:22:37.649270   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.649279   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:37.649286   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:37.649372   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:37.685110   65605 cri.go:89] found id: ""
	I0723 15:22:37.685137   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.685145   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:37.685150   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:37.685201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:37.718210   65605 cri.go:89] found id: ""
	I0723 15:22:37.718231   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.718239   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:37.718245   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:37.718297   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:37.751192   65605 cri.go:89] found id: ""
	I0723 15:22:37.751224   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.751234   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:37.751241   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:37.751294   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:37.781569   65605 cri.go:89] found id: ""
	I0723 15:22:37.781597   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.781607   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:37.781614   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:37.781680   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:37.812886   65605 cri.go:89] found id: ""
	I0723 15:22:37.812916   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.812927   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:37.812934   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:37.812994   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:37.844065   65605 cri.go:89] found id: ""
	I0723 15:22:37.844094   65605 logs.go:276] 0 containers: []
	W0723 15:22:37.844104   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:37.844114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:37.844128   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.857216   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:37.857244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:37.926781   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:37.926807   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:37.926824   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:38.007510   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:38.007544   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:38.045404   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:38.045437   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:40.594590   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:40.607099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:40.607157   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:40.660888   65605 cri.go:89] found id: ""
	I0723 15:22:40.660915   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.660926   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:40.660933   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:40.660992   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:40.698276   65605 cri.go:89] found id: ""
	I0723 15:22:40.698302   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.698310   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:40.698317   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:40.698411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:40.733515   65605 cri.go:89] found id: ""
	I0723 15:22:40.733542   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.733552   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:40.733560   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:40.733619   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:40.765501   65605 cri.go:89] found id: ""
	I0723 15:22:40.765530   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.765541   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:40.765548   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:40.765600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:40.800660   65605 cri.go:89] found id: ""
	I0723 15:22:40.800686   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.800693   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:40.800698   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:40.800744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:40.836084   65605 cri.go:89] found id: ""
	I0723 15:22:40.836111   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.836119   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:40.836125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:40.836179   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:40.872567   65605 cri.go:89] found id: ""
	I0723 15:22:40.872593   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.872601   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:40.872607   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:40.872665   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:40.907965   65605 cri.go:89] found id: ""
	I0723 15:22:40.907995   65605 logs.go:276] 0 containers: []
	W0723 15:22:40.908006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:40.908017   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:40.908032   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:40.977078   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:40.977105   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:40.977124   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:41.059589   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:41.059634   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:41.097934   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:41.097968   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:41.151322   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:41.151365   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:37.133207   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.133345   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.633631   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:39.407087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:41.904675   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:42.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:44.653650   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.665956   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:43.678808   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:43.678882   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:43.711311   65605 cri.go:89] found id: ""
	I0723 15:22:43.711346   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.711356   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:43.711363   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:43.711415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:43.745203   65605 cri.go:89] found id: ""
	I0723 15:22:43.745226   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.745233   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:43.745239   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:43.745303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:43.778815   65605 cri.go:89] found id: ""
	I0723 15:22:43.778851   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.778861   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:43.778868   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:43.778926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:43.812497   65605 cri.go:89] found id: ""
	I0723 15:22:43.812528   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.812538   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:43.812544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:43.812595   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:43.849568   65605 cri.go:89] found id: ""
	I0723 15:22:43.849595   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.849607   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:43.849621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:43.849784   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:43.883486   65605 cri.go:89] found id: ""
	I0723 15:22:43.883515   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.883527   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:43.883535   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:43.883603   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:43.917301   65605 cri.go:89] found id: ""
	I0723 15:22:43.917321   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.917328   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:43.917333   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:43.917388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:43.951808   65605 cri.go:89] found id: ""
	I0723 15:22:43.951835   65605 logs.go:276] 0 containers: []
	W0723 15:22:43.951844   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:43.951853   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:43.951864   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:44.001416   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:44.001448   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:44.014680   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:44.014708   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:44.086008   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:44.086033   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:44.086048   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:44.174647   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:44.174679   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:46.716916   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:46.730403   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:46.730473   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:46.765297   65605 cri.go:89] found id: ""
	I0723 15:22:46.765332   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.765348   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:46.765355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:46.765417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:46.798193   65605 cri.go:89] found id: ""
	I0723 15:22:46.798225   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.798235   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:46.798242   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:46.798309   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:46.830977   65605 cri.go:89] found id: ""
	I0723 15:22:46.831003   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.831015   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:46.831022   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:46.831093   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:44.135515   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.633440   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:43.905132   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.404399   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.655329   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.660172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:46.867414   65605 cri.go:89] found id: ""
	I0723 15:22:46.867441   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.867452   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:46.867459   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:46.867524   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:46.903782   65605 cri.go:89] found id: ""
	I0723 15:22:46.903810   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.903823   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:46.903830   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:46.903912   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:46.936451   65605 cri.go:89] found id: ""
	I0723 15:22:46.936479   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.936486   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:46.936491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:46.936538   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:46.970263   65605 cri.go:89] found id: ""
	I0723 15:22:46.970289   65605 logs.go:276] 0 containers: []
	W0723 15:22:46.970297   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:46.970302   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:46.970370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:47.005023   65605 cri.go:89] found id: ""
	I0723 15:22:47.005055   65605 logs.go:276] 0 containers: []
	W0723 15:22:47.005065   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:47.005074   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:47.005087   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:47.102350   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:47.102398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:47.102432   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:47.194243   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:47.194277   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:47.235510   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:47.235543   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:47.285177   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:47.285208   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:49.799825   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:49.813159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:49.813218   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:49.844937   65605 cri.go:89] found id: ""
	I0723 15:22:49.844966   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.844974   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:49.844979   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:49.845039   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:49.880236   65605 cri.go:89] found id: ""
	I0723 15:22:49.880265   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.880276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:49.880283   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:49.880344   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:49.914260   65605 cri.go:89] found id: ""
	I0723 15:22:49.914289   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.914298   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:49.914306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:49.914360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:49.948948   65605 cri.go:89] found id: ""
	I0723 15:22:49.948979   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.948987   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:49.948994   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:49.949049   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:49.982841   65605 cri.go:89] found id: ""
	I0723 15:22:49.982867   65605 logs.go:276] 0 containers: []
	W0723 15:22:49.982876   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:49.982881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:49.982926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:50.018255   65605 cri.go:89] found id: ""
	I0723 15:22:50.018286   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.018297   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:50.018315   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:50.018366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:50.054476   65605 cri.go:89] found id: ""
	I0723 15:22:50.054505   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.054515   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:50.054521   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:50.054582   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:50.088017   65605 cri.go:89] found id: ""
	I0723 15:22:50.088050   65605 logs.go:276] 0 containers: []
	W0723 15:22:50.088060   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:50.088072   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:50.088086   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:50.140460   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:50.140494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:50.155334   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:50.155371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:50.230361   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:50.230401   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:50.230419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:50.307742   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:50.307789   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:48.635238   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.133390   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:48.406535   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:50.904921   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.905910   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:51.152686   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:53.153547   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:52.847520   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:52.868334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:52.868400   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:52.905903   65605 cri.go:89] found id: ""
	I0723 15:22:52.905930   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.905941   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:52.905948   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:52.906006   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:52.940644   65605 cri.go:89] found id: ""
	I0723 15:22:52.940672   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.940683   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:52.940690   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:52.940752   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:52.973581   65605 cri.go:89] found id: ""
	I0723 15:22:52.973607   65605 logs.go:276] 0 containers: []
	W0723 15:22:52.973615   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:52.973621   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:52.973682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:53.007004   65605 cri.go:89] found id: ""
	I0723 15:22:53.007032   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.007040   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:53.007046   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:53.007100   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:53.040346   65605 cri.go:89] found id: ""
	I0723 15:22:53.040374   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.040385   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:53.040392   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:53.040455   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:53.073620   65605 cri.go:89] found id: ""
	I0723 15:22:53.073653   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.073662   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:53.073668   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:53.073717   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:53.108895   65605 cri.go:89] found id: ""
	I0723 15:22:53.108929   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.108941   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:53.108949   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:53.109014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:53.144145   65605 cri.go:89] found id: ""
	I0723 15:22:53.144171   65605 logs.go:276] 0 containers: []
	W0723 15:22:53.144179   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:53.144190   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:53.144207   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:53.181580   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:53.181617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:53.235261   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:53.235292   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:53.249317   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:53.249352   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:53.317382   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:53.317403   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:53.317419   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:55.899766   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:55.913612   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:55.913685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:55.945832   65605 cri.go:89] found id: ""
	I0723 15:22:55.945865   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.945877   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:55.945884   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:55.945939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:55.977489   65605 cri.go:89] found id: ""
	I0723 15:22:55.977522   65605 logs.go:276] 0 containers: []
	W0723 15:22:55.977533   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:55.977546   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:55.977607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:56.011727   65605 cri.go:89] found id: ""
	I0723 15:22:56.011758   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.011770   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:56.011781   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:56.011850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:56.044046   65605 cri.go:89] found id: ""
	I0723 15:22:56.044076   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.044086   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:56.044093   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:56.044148   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:56.078615   65605 cri.go:89] found id: ""
	I0723 15:22:56.078638   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.078644   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:56.078649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:56.078702   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:56.112720   65605 cri.go:89] found id: ""
	I0723 15:22:56.112746   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.112754   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:56.112759   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:56.112807   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:56.146436   65605 cri.go:89] found id: ""
	I0723 15:22:56.146464   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.146475   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:56.146483   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:56.146545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:56.179819   65605 cri.go:89] found id: ""
	I0723 15:22:56.179850   65605 logs.go:276] 0 containers: []
	W0723 15:22:56.179859   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:56.179868   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:56.179885   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:56.219608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:56.219636   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:56.268158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:56.268192   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:56.281422   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:56.281449   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:56.351169   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:56.351190   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:56.351206   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:53.133444   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.632360   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.404787   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.905423   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:55.652504   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:57.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.655049   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:58.933585   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:22:58.946516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:22:58.946607   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:22:58.980970   65605 cri.go:89] found id: ""
	I0723 15:22:58.980994   65605 logs.go:276] 0 containers: []
	W0723 15:22:58.981004   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:22:58.981012   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:22:58.981083   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:22:59.019301   65605 cri.go:89] found id: ""
	I0723 15:22:59.019337   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.019352   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:22:59.019360   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:22:59.019417   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:22:59.053653   65605 cri.go:89] found id: ""
	I0723 15:22:59.053677   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.053685   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:22:59.053690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:22:59.053745   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:22:59.086737   65605 cri.go:89] found id: ""
	I0723 15:22:59.086764   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.086772   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:22:59.086778   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:22:59.086833   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:22:59.120689   65605 cri.go:89] found id: ""
	I0723 15:22:59.120717   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.120725   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:22:59.120731   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:22:59.120793   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:22:59.157267   65605 cri.go:89] found id: ""
	I0723 15:22:59.157305   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.157313   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:22:59.157319   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:22:59.157370   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:22:59.193432   65605 cri.go:89] found id: ""
	I0723 15:22:59.193457   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.193468   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:22:59.193474   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:22:59.193518   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:22:59.227501   65605 cri.go:89] found id: ""
	I0723 15:22:59.227528   65605 logs.go:276] 0 containers: []
	W0723 15:22:59.227535   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:22:59.227544   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:22:59.227555   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:22:59.314420   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:22:59.314465   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:22:59.354311   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:22:59.354354   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:22:59.406158   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:22:59.406189   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:22:59.419244   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:22:59.419270   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:22:59.494399   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:22:57.632469   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:00.133084   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:22:59.905483   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.406340   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:02.154105   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.655454   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:01.995403   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:02.008395   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:02.008459   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:02.041952   65605 cri.go:89] found id: ""
	I0723 15:23:02.041979   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.041989   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:02.041995   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:02.042061   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:02.079353   65605 cri.go:89] found id: ""
	I0723 15:23:02.079383   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.079390   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:02.079397   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:02.079453   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:02.114222   65605 cri.go:89] found id: ""
	I0723 15:23:02.114251   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.114261   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:02.114269   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:02.114350   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:02.146563   65605 cri.go:89] found id: ""
	I0723 15:23:02.146591   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.146603   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:02.146610   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:02.146675   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:02.184401   65605 cri.go:89] found id: ""
	I0723 15:23:02.184428   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.184436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:02.184442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:02.184489   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:02.221304   65605 cri.go:89] found id: ""
	I0723 15:23:02.221339   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.221350   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:02.221358   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:02.221424   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:02.266255   65605 cri.go:89] found id: ""
	I0723 15:23:02.266280   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.266288   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:02.266308   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:02.266364   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:02.302038   65605 cri.go:89] found id: ""
	I0723 15:23:02.302064   65605 logs.go:276] 0 containers: []
	W0723 15:23:02.302075   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:02.302085   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:02.302102   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.352709   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:02.352743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:02.366113   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:02.366141   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:02.433621   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:02.433658   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:02.433674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:02.512443   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:02.512479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.051227   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:05.063634   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:05.063704   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:05.099833   65605 cri.go:89] found id: ""
	I0723 15:23:05.099862   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.099872   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:05.099880   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:05.099942   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:05.136009   65605 cri.go:89] found id: ""
	I0723 15:23:05.136030   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.136036   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:05.136042   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:05.136089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:05.171390   65605 cri.go:89] found id: ""
	I0723 15:23:05.171423   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.171434   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:05.171441   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:05.171497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:05.210193   65605 cri.go:89] found id: ""
	I0723 15:23:05.210220   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.210229   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:05.210236   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:05.210318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:05.243266   65605 cri.go:89] found id: ""
	I0723 15:23:05.243290   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.243298   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:05.243304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:05.243368   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:05.273795   65605 cri.go:89] found id: ""
	I0723 15:23:05.273826   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.273835   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:05.273842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:05.273918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:05.305498   65605 cri.go:89] found id: ""
	I0723 15:23:05.305521   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.305528   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:05.305533   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:05.305587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:05.337867   65605 cri.go:89] found id: ""
	I0723 15:23:05.337894   65605 logs.go:276] 0 containers: []
	W0723 15:23:05.337905   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:05.337917   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:05.337934   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:05.353531   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:05.353564   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:05.419605   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:05.419630   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:05.419644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:05.503361   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:05.503395   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:05.539514   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:05.539547   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:02.633357   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.633516   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:04.904960   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.913789   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:06.657437   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.660064   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:08.091151   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:08.103930   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:08.104007   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:08.136853   65605 cri.go:89] found id: ""
	I0723 15:23:08.136874   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.136881   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:08.136887   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:08.136940   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:08.171525   65605 cri.go:89] found id: ""
	I0723 15:23:08.171556   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.171577   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:08.171584   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:08.171652   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:08.205887   65605 cri.go:89] found id: ""
	I0723 15:23:08.205919   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.205930   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:08.205940   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:08.206001   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:08.238304   65605 cri.go:89] found id: ""
	I0723 15:23:08.238329   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.238337   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:08.238342   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:08.238411   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:08.270162   65605 cri.go:89] found id: ""
	I0723 15:23:08.270194   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.270203   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:08.270211   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:08.270273   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:08.312963   65605 cri.go:89] found id: ""
	I0723 15:23:08.312991   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.312999   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:08.313005   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:08.313065   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:08.345211   65605 cri.go:89] found id: ""
	I0723 15:23:08.345246   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.345258   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:08.345267   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:08.345326   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:08.381355   65605 cri.go:89] found id: ""
	I0723 15:23:08.381390   65605 logs.go:276] 0 containers: []
	W0723 15:23:08.381399   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:08.381409   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:08.381421   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:08.436680   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:08.436718   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:08.450210   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:08.450245   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:08.517469   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:08.517490   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:08.517504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:08.603147   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:08.603185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:11.142363   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:11.158204   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:11.158278   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:11.197181   65605 cri.go:89] found id: ""
	I0723 15:23:11.197211   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.197227   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:11.197234   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:11.197302   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:11.232698   65605 cri.go:89] found id: ""
	I0723 15:23:11.232726   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.232736   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:11.232742   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:11.232801   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:11.263268   65605 cri.go:89] found id: ""
	I0723 15:23:11.263293   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.263301   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:11.263306   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:11.263363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:11.294213   65605 cri.go:89] found id: ""
	I0723 15:23:11.294242   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.294254   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:11.294261   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:11.294340   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:11.324721   65605 cri.go:89] found id: ""
	I0723 15:23:11.324753   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.324766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:11.324773   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:11.324834   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:11.356563   65605 cri.go:89] found id: ""
	I0723 15:23:11.356595   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.356606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:11.356620   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:11.356685   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:11.387818   65605 cri.go:89] found id: ""
	I0723 15:23:11.387850   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.387859   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:11.387866   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:11.387926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:11.422612   65605 cri.go:89] found id: ""
	I0723 15:23:11.422639   65605 logs.go:276] 0 containers: []
	W0723 15:23:11.422649   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:11.422659   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:11.422672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:11.475997   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:11.476028   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:11.489064   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:11.489095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:11.557384   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:11.557408   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:11.557427   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:11.636906   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:11.636933   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:07.134834   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.636699   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:09.405125   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.406702   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:11.153281   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.153390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.154674   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.176790   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:14.190898   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:14.190972   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:14.225264   65605 cri.go:89] found id: ""
	I0723 15:23:14.225297   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.225308   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:14.225314   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:14.225378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:14.257092   65605 cri.go:89] found id: ""
	I0723 15:23:14.257119   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.257132   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:14.257138   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:14.257201   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:14.291068   65605 cri.go:89] found id: ""
	I0723 15:23:14.291095   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.291104   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:14.291111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:14.291170   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:14.324840   65605 cri.go:89] found id: ""
	I0723 15:23:14.324872   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.324881   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:14.324888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:14.324948   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:14.358228   65605 cri.go:89] found id: ""
	I0723 15:23:14.358258   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.358268   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:14.358275   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:14.358333   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:14.389136   65605 cri.go:89] found id: ""
	I0723 15:23:14.389164   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.389174   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:14.389181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:14.389241   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:14.424386   65605 cri.go:89] found id: ""
	I0723 15:23:14.424413   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.424424   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:14.424432   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:14.424492   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:14.457206   65605 cri.go:89] found id: ""
	I0723 15:23:14.457234   65605 logs.go:276] 0 containers: []
	W0723 15:23:14.457244   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:14.457254   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:14.457265   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:14.535708   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:14.535742   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:14.573579   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:14.573603   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:14.627966   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:14.627994   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:14.641305   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:14.641332   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:14.723499   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
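	(Editor's note: the cycle above repeats while the apiserver on localhost:8443 is down: for each control-plane component the runner executes "sudo crictl ps -a --quiet --name=<component>" and logs that no container was found. The following is a minimal, hypothetical Go sketch of that probe loop, for illustration only; the file name, program structure, and output wording are assumptions and this is not minikube's actual implementation.)

	// probe_containers.go - hypothetical sketch of the crictl probe loop seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Components probed in each cycle of the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Equivalent to: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the "No container was found matching" warnings in the log.
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}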
	I0723 15:23:12.133966   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:14.633521   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:16.633785   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:13.905045   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:15.905186   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.653465   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:19.653755   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:17.224268   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:17.236467   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:17.236530   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:17.269668   65605 cri.go:89] found id: ""
	I0723 15:23:17.269697   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.269704   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:17.269709   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:17.269753   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:17.300573   65605 cri.go:89] found id: ""
	I0723 15:23:17.300596   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.300603   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:17.300608   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:17.300655   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:17.332627   65605 cri.go:89] found id: ""
	I0723 15:23:17.332653   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.332661   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:17.332666   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:17.332716   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:17.363759   65605 cri.go:89] found id: ""
	I0723 15:23:17.363786   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.363794   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:17.363799   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:17.363854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:17.396986   65605 cri.go:89] found id: ""
	I0723 15:23:17.397016   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.397023   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:17.397031   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:17.397089   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:17.435454   65605 cri.go:89] found id: ""
	I0723 15:23:17.435478   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.435488   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:17.435495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:17.435551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:17.469529   65605 cri.go:89] found id: ""
	I0723 15:23:17.469570   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.469581   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:17.469589   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:17.469654   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:17.505356   65605 cri.go:89] found id: ""
	I0723 15:23:17.505384   65605 logs.go:276] 0 containers: []
	W0723 15:23:17.505395   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:17.505405   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:17.505420   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:17.548656   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:17.548682   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:17.602439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:17.602471   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:17.614872   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:17.614902   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:17.684914   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:17.684939   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:17.684958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.271384   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:20.284619   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:20.284682   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:20.319522   65605 cri.go:89] found id: ""
	I0723 15:23:20.319545   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.319552   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:20.319557   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:20.319608   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:20.357359   65605 cri.go:89] found id: ""
	I0723 15:23:20.357385   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.357393   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:20.357399   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:20.357444   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:20.390651   65605 cri.go:89] found id: ""
	I0723 15:23:20.390680   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.390692   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:20.390699   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:20.390757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:20.425243   65605 cri.go:89] found id: ""
	I0723 15:23:20.425274   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.425288   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:20.425295   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:20.425367   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:20.459665   65605 cri.go:89] found id: ""
	I0723 15:23:20.459687   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.459694   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:20.459700   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:20.459749   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:20.494836   65605 cri.go:89] found id: ""
	I0723 15:23:20.494869   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.494879   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:20.494887   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:20.494946   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:20.528807   65605 cri.go:89] found id: ""
	I0723 15:23:20.528839   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.528847   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:20.528854   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:20.528904   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:20.563111   65605 cri.go:89] found id: ""
	I0723 15:23:20.563139   65605 logs.go:276] 0 containers: []
	W0723 15:23:20.563148   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:20.563160   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:20.563175   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:20.576259   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:20.576290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:20.641528   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:20.641551   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:20.641565   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:20.717413   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:20.717452   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:20.756832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:20.756858   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:19.133570   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:21.133680   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:18.404406   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:20.405712   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.904785   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:22.153273   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:24.654959   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:23.308839   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:23.322122   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:23.322203   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:23.353454   65605 cri.go:89] found id: ""
	I0723 15:23:23.353483   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.353491   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:23.353496   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:23.353550   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:23.385194   65605 cri.go:89] found id: ""
	I0723 15:23:23.385218   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.385226   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:23.385231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:23.385286   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:23.420259   65605 cri.go:89] found id: ""
	I0723 15:23:23.420287   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.420295   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:23.420301   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:23.420366   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:23.453107   65605 cri.go:89] found id: ""
	I0723 15:23:23.453134   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.453145   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:23.453152   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:23.453208   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:23.485147   65605 cri.go:89] found id: ""
	I0723 15:23:23.485178   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.485185   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:23.485191   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:23.485239   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:23.516682   65605 cri.go:89] found id: ""
	I0723 15:23:23.516709   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.516721   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:23.516729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:23.516855   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:23.552804   65605 cri.go:89] found id: ""
	I0723 15:23:23.552836   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.552846   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:23.552853   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:23.552916   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:23.585951   65605 cri.go:89] found id: ""
	I0723 15:23:23.585977   65605 logs.go:276] 0 containers: []
	W0723 15:23:23.585988   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:23.586000   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:23.586014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.641439   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:23.641469   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:23.655213   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:23.655243   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:23.726461   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:23.726482   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:23.726496   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:23.806530   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:23.806572   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.346727   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:26.359785   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:26.359854   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:26.394547   65605 cri.go:89] found id: ""
	I0723 15:23:26.394583   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.394593   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:26.394600   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:26.394660   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:26.429602   65605 cri.go:89] found id: ""
	I0723 15:23:26.429632   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.429640   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:26.429646   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:26.429735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:26.461875   65605 cri.go:89] found id: ""
	I0723 15:23:26.461902   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.461909   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:26.461916   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:26.461987   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:26.494721   65605 cri.go:89] found id: ""
	I0723 15:23:26.494743   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.494751   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:26.494756   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:26.494802   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:26.530828   65605 cri.go:89] found id: ""
	I0723 15:23:26.530854   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.530863   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:26.530871   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:26.530939   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:26.564508   65605 cri.go:89] found id: ""
	I0723 15:23:26.564540   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.564551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:26.564558   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:26.564618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:26.599354   65605 cri.go:89] found id: ""
	I0723 15:23:26.599378   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.599387   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:26.599393   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:26.599460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:26.654360   65605 cri.go:89] found id: ""
	I0723 15:23:26.654409   65605 logs.go:276] 0 containers: []
	W0723 15:23:26.654420   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:26.654429   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:26.654446   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:26.722180   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:26.722212   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:26.722226   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:26.803291   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:26.803324   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:26.842829   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:26.842860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:23.633887   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.133371   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:25.406139   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:27.905699   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.656334   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:29.153898   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:26.896814   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:26.896854   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.411463   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:29.424509   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:29.424574   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:29.458014   65605 cri.go:89] found id: ""
	I0723 15:23:29.458042   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.458049   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:29.458055   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:29.458108   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:29.492762   65605 cri.go:89] found id: ""
	I0723 15:23:29.492792   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.492802   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:29.492809   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:29.492862   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:29.526807   65605 cri.go:89] found id: ""
	I0723 15:23:29.526840   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.526851   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:29.526858   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:29.526922   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:29.560110   65605 cri.go:89] found id: ""
	I0723 15:23:29.560133   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.560140   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:29.560146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:29.560195   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:29.596287   65605 cri.go:89] found id: ""
	I0723 15:23:29.596317   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.596327   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:29.596334   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:29.596389   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:29.629292   65605 cri.go:89] found id: ""
	I0723 15:23:29.629338   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.629345   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:29.629353   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:29.629404   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:29.666018   65605 cri.go:89] found id: ""
	I0723 15:23:29.666048   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.666058   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:29.666065   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:29.666131   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:29.699967   65605 cri.go:89] found id: ""
	I0723 15:23:29.699996   65605 logs.go:276] 0 containers: []
	W0723 15:23:29.700006   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:29.700018   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:29.700034   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:29.749759   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:29.749792   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:29.763116   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:29.763142   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:29.836309   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:29.836332   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:29.836343   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:29.916337   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:29.916371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:28.633677   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.132726   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:30.405168   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.905063   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:31.653297   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:33.653432   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:32.463927   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:32.477072   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:32.477150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:32.509915   65605 cri.go:89] found id: ""
	I0723 15:23:32.509938   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.509945   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:32.509952   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:32.510000   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:32.543302   65605 cri.go:89] found id: ""
	I0723 15:23:32.543344   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.543360   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:32.543368   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:32.543438   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:32.579516   65605 cri.go:89] found id: ""
	I0723 15:23:32.579544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.579555   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:32.579562   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:32.579621   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:32.613175   65605 cri.go:89] found id: ""
	I0723 15:23:32.613210   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.613218   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:32.613224   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:32.613282   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:32.646801   65605 cri.go:89] found id: ""
	I0723 15:23:32.646826   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.646835   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:32.646842   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:32.646902   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:32.683518   65605 cri.go:89] found id: ""
	I0723 15:23:32.683544   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.683551   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:32.683556   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:32.683611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:32.719448   65605 cri.go:89] found id: ""
	I0723 15:23:32.719475   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.719485   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:32.719490   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:32.719568   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:32.752706   65605 cri.go:89] found id: ""
	I0723 15:23:32.752731   65605 logs.go:276] 0 containers: []
	W0723 15:23:32.752738   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:32.752747   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:32.752757   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:32.800191   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:32.800220   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:32.850990   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:32.851025   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:32.863700   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:32.863729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:32.928054   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:32.928080   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:32.928095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:35.507452   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:35.520681   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:35.520760   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:35.554642   65605 cri.go:89] found id: ""
	I0723 15:23:35.554668   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.554680   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:35.554687   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:35.554750   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:35.585970   65605 cri.go:89] found id: ""
	I0723 15:23:35.585994   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.586004   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:35.586011   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:35.586069   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:35.625178   65605 cri.go:89] found id: ""
	I0723 15:23:35.625202   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.625212   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:35.625226   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:35.625274   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:35.658618   65605 cri.go:89] found id: ""
	I0723 15:23:35.658647   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.658666   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:35.658682   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:35.658742   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:35.696724   65605 cri.go:89] found id: ""
	I0723 15:23:35.696760   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.696768   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:35.696774   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:35.696825   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:35.728399   65605 cri.go:89] found id: ""
	I0723 15:23:35.728426   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.728435   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:35.728440   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:35.728496   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:35.758374   65605 cri.go:89] found id: ""
	I0723 15:23:35.758419   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.758429   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:35.758436   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:35.758497   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:35.789013   65605 cri.go:89] found id: ""
	I0723 15:23:35.789041   65605 logs.go:276] 0 containers: []
	W0723 15:23:35.789050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:35.789058   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:35.789069   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:35.843703   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:35.843739   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:35.856489   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:35.856514   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:35.926784   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:35.926804   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:35.926819   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:36.009552   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:36.009591   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:33.632247   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.633037   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.404984   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:37.905720   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:35.653742   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.154008   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:38.545830   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:38.560412   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:38.560491   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:38.596495   65605 cri.go:89] found id: ""
	I0723 15:23:38.596521   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.596532   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:38.596538   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:38.596587   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:38.635068   65605 cri.go:89] found id: ""
	I0723 15:23:38.635095   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.635104   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:38.635109   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:38.635180   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:38.675832   65605 cri.go:89] found id: ""
	I0723 15:23:38.675876   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.675891   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:38.675897   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:38.675956   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:38.711052   65605 cri.go:89] found id: ""
	I0723 15:23:38.711080   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.711100   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:38.711108   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:38.711171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:38.749437   65605 cri.go:89] found id: ""
	I0723 15:23:38.749479   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.749490   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:38.749498   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:38.749554   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:38.790721   65605 cri.go:89] found id: ""
	I0723 15:23:38.790743   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.790751   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:38.790758   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:38.790818   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:38.840127   65605 cri.go:89] found id: ""
	I0723 15:23:38.840156   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.840167   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:38.840174   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:38.840233   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:38.895252   65605 cri.go:89] found id: ""
	I0723 15:23:38.895281   65605 logs.go:276] 0 containers: []
	W0723 15:23:38.895291   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:38.895301   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:38.895317   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:38.933441   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:38.933479   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:38.987128   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:38.987160   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:39.001547   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:39.001578   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:39.070363   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:39.070398   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:39.070413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:41.648668   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:41.664247   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:41.664303   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:41.697926   65605 cri.go:89] found id: ""
	I0723 15:23:41.697954   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.697962   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:41.697967   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:41.698014   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:41.735306   65605 cri.go:89] found id: ""
	I0723 15:23:41.735336   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.735347   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:41.735355   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:41.735413   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:41.773005   65605 cri.go:89] found id: ""
	I0723 15:23:41.773030   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.773040   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:41.773047   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:41.773105   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:41.806683   65605 cri.go:89] found id: ""
	I0723 15:23:41.806711   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.806722   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:41.806729   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:41.806779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:41.842021   65605 cri.go:89] found id: ""
	I0723 15:23:41.842047   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.842063   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:41.842070   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:41.842130   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:37.633918   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.132895   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:39.906489   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.405244   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:40.652778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:42.656127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:45.155065   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:41.874772   65605 cri.go:89] found id: ""
	I0723 15:23:41.874802   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.874812   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:41.874819   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:41.874883   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:41.908618   65605 cri.go:89] found id: ""
	I0723 15:23:41.908643   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.908651   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:41.908656   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:41.908705   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:41.942529   65605 cri.go:89] found id: ""
	I0723 15:23:41.942562   65605 logs.go:276] 0 containers: []
	W0723 15:23:41.942573   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:41.942586   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:41.942601   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:41.995763   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:41.995820   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:42.009263   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:42.009290   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:42.076948   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:42.076970   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:42.076989   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:42.157399   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:42.157442   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:44.699439   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:44.712779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:44.712850   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:44.746666   65605 cri.go:89] found id: ""
	I0723 15:23:44.746692   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.746701   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:44.746713   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:44.746775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:44.780144   65605 cri.go:89] found id: ""
	I0723 15:23:44.780171   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.780178   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:44.780184   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:44.780240   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:44.816646   65605 cri.go:89] found id: ""
	I0723 15:23:44.816676   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.816688   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:44.816696   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:44.816830   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:44.848830   65605 cri.go:89] found id: ""
	I0723 15:23:44.848860   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.848873   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:44.848880   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:44.848945   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:44.882216   65605 cri.go:89] found id: ""
	I0723 15:23:44.882252   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.882265   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:44.882274   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:44.882363   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:44.915894   65605 cri.go:89] found id: ""
	I0723 15:23:44.915921   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.915930   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:44.915937   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:44.916003   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:44.948902   65605 cri.go:89] found id: ""
	I0723 15:23:44.948936   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.948954   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:44.948964   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:44.949034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:44.981658   65605 cri.go:89] found id: ""
	I0723 15:23:44.981685   65605 logs.go:276] 0 containers: []
	W0723 15:23:44.981698   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:44.981709   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:44.981724   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:45.034030   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:45.034063   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:45.047545   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:45.047577   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:45.113885   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:45.113905   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:45.113917   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:45.195865   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:45.195907   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:42.133464   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.633278   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.633730   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:44.406233   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:46.904918   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.156318   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:49.653208   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:47.740466   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:47.752890   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:47.752958   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:47.786124   65605 cri.go:89] found id: ""
	I0723 15:23:47.786149   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.786157   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:47.786162   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:47.786211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:47.818051   65605 cri.go:89] found id: ""
	I0723 15:23:47.818073   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.818081   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:47.818086   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:47.818134   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:47.854144   65605 cri.go:89] found id: ""
	I0723 15:23:47.854168   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.854176   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:47.854181   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:47.854226   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:47.885781   65605 cri.go:89] found id: ""
	I0723 15:23:47.885809   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.885819   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:47.885826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:47.885888   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:47.917809   65605 cri.go:89] found id: ""
	I0723 15:23:47.917840   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.917850   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:47.917857   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:47.917921   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:47.950041   65605 cri.go:89] found id: ""
	I0723 15:23:47.950069   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.950078   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:47.950085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:47.950145   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:47.983108   65605 cri.go:89] found id: ""
	I0723 15:23:47.983143   65605 logs.go:276] 0 containers: []
	W0723 15:23:47.983154   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:47.983163   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:47.983232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:48.014560   65605 cri.go:89] found id: ""
	I0723 15:23:48.014604   65605 logs.go:276] 0 containers: []
	W0723 15:23:48.014612   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:48.014621   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:48.014638   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:48.027469   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:48.027494   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:48.097571   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:48.097601   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:48.097615   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:48.178586   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:48.178618   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:48.215769   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:48.215794   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:50.768087   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:50.781396   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:50.781467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:50.817297   65605 cri.go:89] found id: ""
	I0723 15:23:50.817327   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.817335   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:50.817341   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:50.817388   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:50.850439   65605 cri.go:89] found id: ""
	I0723 15:23:50.850467   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.850476   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:50.850483   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:50.850552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:50.884601   65605 cri.go:89] found id: ""
	I0723 15:23:50.884630   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.884641   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:50.884649   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:50.884714   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:50.918971   65605 cri.go:89] found id: ""
	I0723 15:23:50.918996   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.919004   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:50.919010   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:50.919072   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:50.951244   65605 cri.go:89] found id: ""
	I0723 15:23:50.951277   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.951284   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:50.951290   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:50.951360   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:50.983289   65605 cri.go:89] found id: ""
	I0723 15:23:50.983326   65605 logs.go:276] 0 containers: []
	W0723 15:23:50.983334   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:50.983339   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:50.983392   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:51.019584   65605 cri.go:89] found id: ""
	I0723 15:23:51.019614   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.019624   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:51.019631   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:51.019693   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:51.050981   65605 cri.go:89] found id: ""
	I0723 15:23:51.051005   65605 logs.go:276] 0 containers: []
	W0723 15:23:51.051014   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:51.051023   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:51.051038   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:51.088826   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:51.088852   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:51.141369   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:51.141401   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:51.155419   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:51.155450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:51.222640   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:51.222662   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:51.222675   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:49.133154   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.632559   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:48.905876   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.404543   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:51.654814   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:54.153611   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.802706   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:53.815926   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:53.815985   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:53.847867   65605 cri.go:89] found id: ""
	I0723 15:23:53.847900   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.847913   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:53.847921   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:53.847981   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:53.881461   65605 cri.go:89] found id: ""
	I0723 15:23:53.881489   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.881499   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:53.881506   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:53.881569   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:53.921025   65605 cri.go:89] found id: ""
	I0723 15:23:53.921059   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.921070   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:53.921076   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:53.921135   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:53.955219   65605 cri.go:89] found id: ""
	I0723 15:23:53.955242   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.955250   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:53.955255   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:53.955318   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:53.991874   65605 cri.go:89] found id: ""
	I0723 15:23:53.991905   65605 logs.go:276] 0 containers: []
	W0723 15:23:53.991915   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:53.991922   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:53.991986   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:54.024702   65605 cri.go:89] found id: ""
	I0723 15:23:54.024735   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.024745   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:54.024752   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:54.024819   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:54.063778   65605 cri.go:89] found id: ""
	I0723 15:23:54.063801   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.063808   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:54.063813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:54.063861   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:54.098194   65605 cri.go:89] found id: ""
	I0723 15:23:54.098222   65605 logs.go:276] 0 containers: []
	W0723 15:23:54.098232   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:54.098244   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:54.098258   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:54.148576   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:54.148617   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:54.162561   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:54.162596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:54.236614   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:54.236647   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:54.236663   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:54.315900   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:54.315932   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:53.632910   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.633683   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:53.404873   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:55.904545   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:57.904874   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.153719   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:58.154355   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:56.853674   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:56.867190   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:56.867270   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:56.901757   65605 cri.go:89] found id: ""
	I0723 15:23:56.901782   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.901792   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:56.901799   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:56.901858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:56.943877   65605 cri.go:89] found id: ""
	I0723 15:23:56.943909   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.943920   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:56.943926   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:56.943983   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:56.977156   65605 cri.go:89] found id: ""
	I0723 15:23:56.977186   65605 logs.go:276] 0 containers: []
	W0723 15:23:56.977194   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:56.977200   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:56.977260   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:23:57.009251   65605 cri.go:89] found id: ""
	I0723 15:23:57.009280   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.009290   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:23:57.009297   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:23:57.009362   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:23:57.041196   65605 cri.go:89] found id: ""
	I0723 15:23:57.041225   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.041236   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:23:57.041243   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:23:57.041295   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:23:57.081725   65605 cri.go:89] found id: ""
	I0723 15:23:57.081752   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.081760   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:23:57.081765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:23:57.081810   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:23:57.114457   65605 cri.go:89] found id: ""
	I0723 15:23:57.114482   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.114490   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:23:57.114495   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:23:57.114551   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:23:57.149775   65605 cri.go:89] found id: ""
	I0723 15:23:57.149803   65605 logs.go:276] 0 containers: []
	W0723 15:23:57.149814   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:23:57.149824   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:23:57.149838   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:23:57.197984   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:23:57.198014   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:23:57.210717   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:23:57.210743   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:23:57.271374   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:23:57.271392   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:23:57.271403   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:23:57.346151   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:23:57.346185   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:59.882368   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:23:59.895184   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:23:59.895257   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:23:59.928859   65605 cri.go:89] found id: ""
	I0723 15:23:59.928891   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.928902   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:23:59.928909   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:23:59.928967   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:23:59.962441   65605 cri.go:89] found id: ""
	I0723 15:23:59.962472   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.962483   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:23:59.962491   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:23:59.962570   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:23:59.996637   65605 cri.go:89] found id: ""
	I0723 15:23:59.996659   65605 logs.go:276] 0 containers: []
	W0723 15:23:59.996667   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:23:59.996672   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:23:59.996720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:00.029291   65605 cri.go:89] found id: ""
	I0723 15:24:00.029320   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.029330   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:00.029338   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:00.029387   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:00.060869   65605 cri.go:89] found id: ""
	I0723 15:24:00.060898   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.060907   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:00.060912   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:00.060993   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:00.092010   65605 cri.go:89] found id: ""
	I0723 15:24:00.092042   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.092054   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:00.092063   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:00.092128   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:00.124914   65605 cri.go:89] found id: ""
	I0723 15:24:00.124940   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.124949   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:00.124955   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:00.125016   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:00.159927   65605 cri.go:89] found id: ""
	I0723 15:24:00.159953   65605 logs.go:276] 0 containers: []
	W0723 15:24:00.159962   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:00.159977   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:00.159993   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:00.209719   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:00.209764   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:00.224757   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:00.224784   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:00.292079   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:00.292100   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:00.292113   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:00.377382   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:00.377415   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:23:58.132374   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.133083   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:23:59.906087   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.404839   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:00.655745   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.658870   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.153217   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:02.916818   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:02.931524   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:02.931594   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:02.966440   65605 cri.go:89] found id: ""
	I0723 15:24:02.966462   65605 logs.go:276] 0 containers: []
	W0723 15:24:02.966470   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:02.966475   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:02.966525   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:03.000833   65605 cri.go:89] found id: ""
	I0723 15:24:03.000857   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.000865   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:03.000870   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:03.000918   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:03.035531   65605 cri.go:89] found id: ""
	I0723 15:24:03.035559   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.035570   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:03.035577   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:03.035636   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:03.068376   65605 cri.go:89] found id: ""
	I0723 15:24:03.068401   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.068411   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:03.068418   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:03.068479   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:03.102499   65605 cri.go:89] found id: ""
	I0723 15:24:03.102532   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.102543   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:03.102549   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:03.102600   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:03.137173   65605 cri.go:89] found id: ""
	I0723 15:24:03.137198   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.137207   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:03.137215   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:03.137259   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:03.170652   65605 cri.go:89] found id: ""
	I0723 15:24:03.170677   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.170685   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:03.170690   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:03.170748   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:03.204828   65605 cri.go:89] found id: ""
	I0723 15:24:03.204855   65605 logs.go:276] 0 containers: []
	W0723 15:24:03.204864   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:03.204875   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:03.204895   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:03.287370   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:03.287413   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:03.323855   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:03.323888   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:03.379809   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:03.379846   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:03.392944   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:03.392971   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:03.465681   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:05.966635   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:05.979888   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:05.979949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:06.013706   65605 cri.go:89] found id: ""
	I0723 15:24:06.013733   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.013740   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:06.013746   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:06.013794   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:06.046584   65605 cri.go:89] found id: ""
	I0723 15:24:06.046612   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.046622   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:06.046630   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:06.046690   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:06.077379   65605 cri.go:89] found id: ""
	I0723 15:24:06.077407   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.077416   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:06.077422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:06.077488   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:06.108946   65605 cri.go:89] found id: ""
	I0723 15:24:06.108975   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.108986   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:06.108993   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:06.109058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:06.143082   65605 cri.go:89] found id: ""
	I0723 15:24:06.143115   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.143123   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:06.143129   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:06.143178   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:06.182735   65605 cri.go:89] found id: ""
	I0723 15:24:06.182762   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.182772   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:06.182779   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:06.182839   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:06.217613   65605 cri.go:89] found id: ""
	I0723 15:24:06.217640   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.217650   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:06.217657   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:06.217720   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:06.252739   65605 cri.go:89] found id: ""
	I0723 15:24:06.252775   65605 logs.go:276] 0 containers: []
	W0723 15:24:06.252787   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:06.252800   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:06.252814   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:06.304325   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:06.304358   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:06.317426   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:06.317450   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:06.384284   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:06.384313   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:06.384329   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:06.460936   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:06.460974   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:02.632839   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:05.132547   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:04.404942   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:06.406131   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:07.153476   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.154627   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.000304   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:09.013544   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:09.013618   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:09.046414   65605 cri.go:89] found id: ""
	I0723 15:24:09.046442   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.046452   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:09.046459   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:09.046522   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:09.083183   65605 cri.go:89] found id: ""
	I0723 15:24:09.083214   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.083225   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:09.083231   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:09.083292   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:09.117524   65605 cri.go:89] found id: ""
	I0723 15:24:09.117568   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.117578   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:09.117585   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:09.117647   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:09.152624   65605 cri.go:89] found id: ""
	I0723 15:24:09.152652   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.152667   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:09.152674   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:09.152735   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:09.186918   65605 cri.go:89] found id: ""
	I0723 15:24:09.186943   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.186951   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:09.186957   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:09.187017   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:09.219857   65605 cri.go:89] found id: ""
	I0723 15:24:09.219889   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.219909   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:09.219917   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:09.219980   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:09.253364   65605 cri.go:89] found id: ""
	I0723 15:24:09.253392   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.253402   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:09.253409   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:09.253469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:09.285049   65605 cri.go:89] found id: ""
	I0723 15:24:09.285072   65605 logs.go:276] 0 containers: []
	W0723 15:24:09.285079   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:09.285088   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:09.285099   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:09.336011   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:09.336046   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:09.349643   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:09.349672   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:09.428156   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:09.428181   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:09.428200   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:09.513917   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:09.513977   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:07.632840   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:09.636373   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:08.904674   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.405130   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:11.653749   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.153549   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:12.053554   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:12.067177   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:12.067242   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:12.097265   65605 cri.go:89] found id: ""
	I0723 15:24:12.097289   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.097298   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:12.097305   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:12.097378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:12.129832   65605 cri.go:89] found id: ""
	I0723 15:24:12.129858   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.129868   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:12.129876   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:12.129938   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:12.164173   65605 cri.go:89] found id: ""
	I0723 15:24:12.164202   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.164213   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:12.164221   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:12.164275   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:12.196604   65605 cri.go:89] found id: ""
	I0723 15:24:12.196637   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.196648   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:12.196655   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:12.196725   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:12.239120   65605 cri.go:89] found id: ""
	I0723 15:24:12.239149   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.239158   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:12.239164   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:12.239232   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:12.273806   65605 cri.go:89] found id: ""
	I0723 15:24:12.273836   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.273847   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:12.273855   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:12.273908   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:12.305937   65605 cri.go:89] found id: ""
	I0723 15:24:12.305965   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.305976   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:12.305984   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:12.306045   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:12.337795   65605 cri.go:89] found id: ""
	I0723 15:24:12.337822   65605 logs.go:276] 0 containers: []
	W0723 15:24:12.337830   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:12.337839   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:12.337850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:12.390476   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:12.390512   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:12.405397   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:12.405422   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:12.474687   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:12.474711   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:12.474730   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:12.551302   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:12.551341   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:15.094530   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:15.108194   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:15.108267   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:15.141068   65605 cri.go:89] found id: ""
	I0723 15:24:15.141095   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.141103   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:15.141109   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:15.141167   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:15.176226   65605 cri.go:89] found id: ""
	I0723 15:24:15.176260   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.176276   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:15.176284   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:15.176348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:15.209086   65605 cri.go:89] found id: ""
	I0723 15:24:15.209115   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.209123   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:15.209128   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:15.209175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:15.245808   65605 cri.go:89] found id: ""
	I0723 15:24:15.245842   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.245853   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:15.245863   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:15.245926   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:15.277680   65605 cri.go:89] found id: ""
	I0723 15:24:15.277710   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.277720   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:15.277728   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:15.277789   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:15.308419   65605 cri.go:89] found id: ""
	I0723 15:24:15.308443   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.308450   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:15.308456   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:15.308515   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:15.340785   65605 cri.go:89] found id: ""
	I0723 15:24:15.340812   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.340820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:15.340825   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:15.340871   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:15.376014   65605 cri.go:89] found id: ""
	I0723 15:24:15.376040   65605 logs.go:276] 0 containers: []
	W0723 15:24:15.376050   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:15.376061   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:15.376074   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:15.427672   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:15.427706   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:15.441726   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:15.441755   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:15.508628   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:15.508659   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:15.508674   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:15.589246   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:15.589284   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:12.133283   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:14.632399   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:13.905548   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.405913   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:16.652810   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.653725   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.128036   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:18.141529   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:18.141604   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:18.176401   65605 cri.go:89] found id: ""
	I0723 15:24:18.176434   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.176446   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:18.176453   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:18.176507   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:18.209833   65605 cri.go:89] found id: ""
	I0723 15:24:18.209868   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.209878   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:18.209886   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:18.209949   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:18.243094   65605 cri.go:89] found id: ""
	I0723 15:24:18.243129   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.243139   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:18.243146   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:18.243211   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:18.275929   65605 cri.go:89] found id: ""
	I0723 15:24:18.275957   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.275968   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:18.275980   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:18.276037   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:18.309064   65605 cri.go:89] found id: ""
	I0723 15:24:18.309095   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.309103   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:18.309109   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:18.309171   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:18.345446   65605 cri.go:89] found id: ""
	I0723 15:24:18.345475   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.345485   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:18.345491   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:18.345552   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:18.381774   65605 cri.go:89] found id: ""
	I0723 15:24:18.381808   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.381820   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:18.381827   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:18.381881   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:18.435663   65605 cri.go:89] found id: ""
	I0723 15:24:18.435692   65605 logs.go:276] 0 containers: []
	W0723 15:24:18.435706   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:18.435716   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:18.435729   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:18.471152   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:18.471184   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:18.523114   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:18.523146   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:18.536555   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:18.536594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:18.607773   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:18.607792   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:18.607803   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.192781   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:21.205337   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:21.205403   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:21.242125   65605 cri.go:89] found id: ""
	I0723 15:24:21.242155   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.242163   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:21.242170   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:21.242243   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:21.279245   65605 cri.go:89] found id: ""
	I0723 15:24:21.279274   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.279286   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:21.279295   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:21.279361   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:21.311316   65605 cri.go:89] found id: ""
	I0723 15:24:21.311340   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.311348   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:21.311355   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:21.311415   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:21.344444   65605 cri.go:89] found id: ""
	I0723 15:24:21.344468   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.344478   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:21.344485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:21.344545   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:21.381055   65605 cri.go:89] found id: ""
	I0723 15:24:21.381082   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.381092   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:21.381099   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:21.381158   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:21.416593   65605 cri.go:89] found id: ""
	I0723 15:24:21.416621   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.416633   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:21.416643   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:21.416706   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:21.448345   65605 cri.go:89] found id: ""
	I0723 15:24:21.448368   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.448377   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:21.448382   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:21.448426   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:21.481810   65605 cri.go:89] found id: ""
	I0723 15:24:21.481836   65605 logs.go:276] 0 containers: []
	W0723 15:24:21.481843   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:21.481852   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:21.481874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:21.545200   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:21.545227   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:21.545244   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:21.626037   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:21.626073   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:21.667961   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:21.667998   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:21.718622   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:21.718662   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:17.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:19.632774   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.632954   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:18.905257   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:20.906323   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:21.153330   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.153495   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:24.233086   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:24.247111   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:24.247175   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:24.281818   65605 cri.go:89] found id: ""
	I0723 15:24:24.281850   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.281861   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:24.281868   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:24.281924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:24.315621   65605 cri.go:89] found id: ""
	I0723 15:24:24.315647   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.315656   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:24.315664   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:24.315722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:24.350355   65605 cri.go:89] found id: ""
	I0723 15:24:24.350400   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.350410   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:24.350417   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:24.350498   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:24.384584   65605 cri.go:89] found id: ""
	I0723 15:24:24.384611   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.384619   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:24.384625   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:24.384671   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:24.423669   65605 cri.go:89] found id: ""
	I0723 15:24:24.423694   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.423701   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:24.423707   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:24.423754   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:24.456572   65605 cri.go:89] found id: ""
	I0723 15:24:24.456599   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.456606   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:24.456611   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:24.456659   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:24.488024   65605 cri.go:89] found id: ""
	I0723 15:24:24.488047   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.488055   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:24.488061   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:24.488109   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:24.519311   65605 cri.go:89] found id: ""
	I0723 15:24:24.519344   65605 logs.go:276] 0 containers: []
	W0723 15:24:24.519352   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:24.519360   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:24.519371   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:24.568552   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:24.568594   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:24.581845   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:24.581874   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:24.650455   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:24.650478   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:24.650492   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:24.728143   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:24.728179   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:23.633012   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:26.132417   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:23.405046   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.906015   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:25.653352   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.654555   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.152778   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:27.268112   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:27.281947   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:27.282025   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:27.315489   65605 cri.go:89] found id: ""
	I0723 15:24:27.315517   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.315528   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:27.315536   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:27.315599   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:27.348481   65605 cri.go:89] found id: ""
	I0723 15:24:27.348509   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.348519   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:27.348526   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:27.348580   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:27.380628   65605 cri.go:89] found id: ""
	I0723 15:24:27.380659   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.380668   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:27.380673   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:27.380731   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:27.413647   65605 cri.go:89] found id: ""
	I0723 15:24:27.413679   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.413688   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:27.413693   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:27.413744   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:27.450398   65605 cri.go:89] found id: ""
	I0723 15:24:27.450425   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.450436   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:27.450442   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:27.450494   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:27.489071   65605 cri.go:89] found id: ""
	I0723 15:24:27.489101   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.489117   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:27.489125   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:27.489190   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:27.529785   65605 cri.go:89] found id: ""
	I0723 15:24:27.529813   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.529823   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:27.529829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:27.529876   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:27.560811   65605 cri.go:89] found id: ""
	I0723 15:24:27.560843   65605 logs.go:276] 0 containers: []
	W0723 15:24:27.560855   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:27.560866   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:27.560882   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:27.574078   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:27.574100   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:27.636153   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:27.636179   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:27.636194   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:27.714001   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:27.714041   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:27.751396   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:27.751428   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.307581   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:30.319762   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:30.319823   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:30.354317   65605 cri.go:89] found id: ""
	I0723 15:24:30.354341   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.354349   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:30.354355   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:30.354429   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:30.389994   65605 cri.go:89] found id: ""
	I0723 15:24:30.390026   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.390039   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:30.390048   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:30.390122   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:30.428854   65605 cri.go:89] found id: ""
	I0723 15:24:30.428878   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.428887   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:30.428893   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:30.428966   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:30.461727   65605 cri.go:89] found id: ""
	I0723 15:24:30.461752   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.461759   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:30.461765   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:30.461813   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:30.494777   65605 cri.go:89] found id: ""
	I0723 15:24:30.494799   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.494807   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:30.494813   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:30.494858   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:30.531918   65605 cri.go:89] found id: ""
	I0723 15:24:30.531943   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.531954   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:30.531960   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:30.532034   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:30.590683   65605 cri.go:89] found id: ""
	I0723 15:24:30.590710   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.590720   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:30.590727   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:30.590772   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:30.636073   65605 cri.go:89] found id: ""
	I0723 15:24:30.636104   65605 logs.go:276] 0 containers: []
	W0723 15:24:30.636114   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:30.636124   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:30.636138   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:30.686233   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:30.686268   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:30.700266   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:30.700308   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:30.773850   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:30.773868   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:30.773879   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:30.854428   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:30.854464   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:28.633061   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.633604   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:28.404488   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:30.406038   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.905405   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:32.653390   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.153739   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:33.393374   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:33.406722   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:33.406779   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:33.440555   65605 cri.go:89] found id: ""
	I0723 15:24:33.440585   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.440596   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:33.440604   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:33.440666   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:33.473363   65605 cri.go:89] found id: ""
	I0723 15:24:33.473389   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.473398   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:33.473405   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:33.473469   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:33.509772   65605 cri.go:89] found id: ""
	I0723 15:24:33.509805   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.509816   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:33.509829   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:33.509896   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:33.546578   65605 cri.go:89] found id: ""
	I0723 15:24:33.546605   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.546613   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:33.546618   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:33.546686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:33.582735   65605 cri.go:89] found id: ""
	I0723 15:24:33.582759   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.582766   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:33.582771   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:33.582831   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:33.619013   65605 cri.go:89] found id: ""
	I0723 15:24:33.619039   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.619048   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:33.619053   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:33.619110   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:33.655967   65605 cri.go:89] found id: ""
	I0723 15:24:33.655988   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.655995   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:33.656001   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:33.656058   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:33.694266   65605 cri.go:89] found id: ""
	I0723 15:24:33.694303   65605 logs.go:276] 0 containers: []
	W0723 15:24:33.694311   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:33.694319   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:33.694330   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:33.744464   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:33.744504   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:33.759314   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:33.759342   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:33.832308   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:33.832331   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:33.832364   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:33.910820   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:33.910860   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.452804   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:36.465137   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:36.465224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:36.504340   65605 cri.go:89] found id: ""
	I0723 15:24:36.504371   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.504380   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:36.504385   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:36.504436   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:36.539113   65605 cri.go:89] found id: ""
	I0723 15:24:36.539138   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.539147   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:36.539154   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:36.539215   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:36.572443   65605 cri.go:89] found id: ""
	I0723 15:24:36.572468   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.572478   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:36.572485   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:36.572540   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:36.605366   65605 cri.go:89] found id: ""
	I0723 15:24:36.605391   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.605398   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:36.605404   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:36.605467   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:36.637467   65605 cri.go:89] found id: ""
	I0723 15:24:36.637496   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.637506   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:36.637513   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:36.637576   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:36.674630   65605 cri.go:89] found id: ""
	I0723 15:24:36.674652   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.674661   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:36.674669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:36.674722   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:36.707409   65605 cri.go:89] found id: ""
	I0723 15:24:36.707500   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.707511   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:36.707525   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:36.707581   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:36.742746   65605 cri.go:89] found id: ""
	I0723 15:24:36.742771   65605 logs.go:276] 0 containers: []
	W0723 15:24:36.742778   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:36.742786   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:36.742800   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:36.776474   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:36.776498   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:36.826256   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:36.826289   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:36.839568   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:36.839596   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:24:33.132552   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.632486   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:35.405071   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.406177   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:37.653785   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.654028   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	W0723 15:24:36.906055   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:36.906082   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:36.906095   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:39.483791   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:39.496085   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:39.496150   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:39.527545   65605 cri.go:89] found id: ""
	I0723 15:24:39.527573   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.527583   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:39.527590   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:39.527653   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:39.562024   65605 cri.go:89] found id: ""
	I0723 15:24:39.562051   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.562060   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:39.562066   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:39.562115   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:39.600294   65605 cri.go:89] found id: ""
	I0723 15:24:39.600317   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.600324   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:39.600329   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:39.600378   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:39.635629   65605 cri.go:89] found id: ""
	I0723 15:24:39.635653   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.635663   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:39.635669   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:39.635729   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:39.672815   65605 cri.go:89] found id: ""
	I0723 15:24:39.672843   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.672854   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:39.672861   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:39.672924   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:39.705965   65605 cri.go:89] found id: ""
	I0723 15:24:39.705999   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.706009   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:39.706023   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:39.706077   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:39.739262   65605 cri.go:89] found id: ""
	I0723 15:24:39.739288   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.739298   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:39.739304   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:39.739373   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:39.771786   65605 cri.go:89] found id: ""
	I0723 15:24:39.771811   65605 logs.go:276] 0 containers: []
	W0723 15:24:39.771820   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:39.771831   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:39.771844   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:39.813596   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:39.813628   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:39.861596   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:39.861629   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:39.875843   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:39.875867   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:39.947917   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:39.947941   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:39.947958   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:38.135033   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:40.633462   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:39.906043   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.404845   65177 pod_ready.go:102] pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.153505   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:44.154094   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:42.530636   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:42.543636   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:42.543718   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:42.576613   65605 cri.go:89] found id: ""
	I0723 15:24:42.576642   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.576652   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:42.576659   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:42.576723   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:42.611422   65605 cri.go:89] found id: ""
	I0723 15:24:42.611452   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.611460   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:42.611465   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:42.611514   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:42.647346   65605 cri.go:89] found id: ""
	I0723 15:24:42.647370   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.647380   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:42.647386   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:42.647447   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:42.683587   65605 cri.go:89] found id: ""
	I0723 15:24:42.683614   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.683622   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:42.683627   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:42.683673   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:42.715688   65605 cri.go:89] found id: ""
	I0723 15:24:42.715709   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.715717   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:42.715723   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:42.715775   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:42.749589   65605 cri.go:89] found id: ""
	I0723 15:24:42.749624   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.749632   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:42.749637   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:42.749684   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:42.786668   65605 cri.go:89] found id: ""
	I0723 15:24:42.786694   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.786702   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:42.786708   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:42.786757   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:42.821541   65605 cri.go:89] found id: ""
	I0723 15:24:42.821574   65605 logs.go:276] 0 containers: []
	W0723 15:24:42.821585   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:42.821597   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:42.821612   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:42.873689   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:42.873720   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:42.886689   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:42.886719   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:42.958057   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:42.958078   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:42.958093   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:43.042738   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:43.042771   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:45.580764   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:45.593331   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:45.593402   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:45.632356   65605 cri.go:89] found id: ""
	I0723 15:24:45.632386   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.632397   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:45.632404   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:45.632460   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:45.674319   65605 cri.go:89] found id: ""
	I0723 15:24:45.674353   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.674363   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:45.674371   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:45.674450   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:45.718577   65605 cri.go:89] found id: ""
	I0723 15:24:45.718608   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.718616   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:45.718622   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:45.718686   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:45.758866   65605 cri.go:89] found id: ""
	I0723 15:24:45.758894   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.758901   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:45.758907   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:45.758954   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:45.795098   65605 cri.go:89] found id: ""
	I0723 15:24:45.795124   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.795134   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:45.795148   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:45.795224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:45.832205   65605 cri.go:89] found id: ""
	I0723 15:24:45.832236   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.832257   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:45.832266   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:45.832348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:45.867679   65605 cri.go:89] found id: ""
	I0723 15:24:45.867713   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.867725   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:45.867733   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:45.867799   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:45.904960   65605 cri.go:89] found id: ""
	I0723 15:24:45.904999   65605 logs.go:276] 0 containers: []
	W0723 15:24:45.905010   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:45.905022   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:45.905036   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:45.962373   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:45.962434   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:45.978670   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:45.978715   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:46.050765   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:46.050795   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:46.050811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:46.145347   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:46.145387   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:43.132518   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:45.133735   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:43.399717   65177 pod_ready.go:81] duration metric: took 4m0.000898156s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" ...
	E0723 15:24:43.399747   65177 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-rq67z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0723 15:24:43.399766   65177 pod_ready.go:38] duration metric: took 4m8.000231971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:24:43.399796   65177 kubeadm.go:597] duration metric: took 4m15.901150134s to restartPrimaryControlPlane
	W0723 15:24:43.399891   65177 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:43.399930   65177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:46.154147   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.653381   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:48.691420   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:48.704605   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:48.704662   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:48.736998   65605 cri.go:89] found id: ""
	I0723 15:24:48.737030   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.737040   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:48.737048   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:48.737116   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:48.770428   65605 cri.go:89] found id: ""
	I0723 15:24:48.770456   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.770466   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:48.770474   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:48.770534   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:48.804036   65605 cri.go:89] found id: ""
	I0723 15:24:48.804063   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.804073   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:48.804080   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:48.804140   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:48.841221   65605 cri.go:89] found id: ""
	I0723 15:24:48.841247   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.841256   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:48.841263   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:48.841345   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:48.877239   65605 cri.go:89] found id: ""
	I0723 15:24:48.877269   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.877280   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:48.877288   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:48.877348   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:48.910120   65605 cri.go:89] found id: ""
	I0723 15:24:48.910144   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.910153   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:48.910161   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:48.910222   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:48.944831   65605 cri.go:89] found id: ""
	I0723 15:24:48.944861   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.944872   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:48.944881   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:48.944936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:48.978782   65605 cri.go:89] found id: ""
	I0723 15:24:48.978811   65605 logs.go:276] 0 containers: []
	W0723 15:24:48.978821   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:48.978832   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:48.978850   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:49.031863   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:49.031900   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:49.045173   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:49.045196   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:49.115607   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:49.115632   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:49.115644   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:49.195137   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:49.195186   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:51.732915   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:51.746885   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:24:51.746970   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:24:51.787857   65605 cri.go:89] found id: ""
	I0723 15:24:51.787878   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.787885   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:24:51.787890   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:24:51.787933   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:24:51.826515   65605 cri.go:89] found id: ""
	I0723 15:24:51.826537   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.826545   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:24:51.826550   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:24:51.826611   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:24:47.634980   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:50.132905   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.153224   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:53.153400   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:51.863825   65605 cri.go:89] found id: ""
	I0723 15:24:51.863867   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.863878   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:24:51.863884   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:24:51.863936   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:24:51.901367   65605 cri.go:89] found id: ""
	I0723 15:24:51.901403   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.901414   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:24:51.901422   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:24:51.901474   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:24:51.933270   65605 cri.go:89] found id: ""
	I0723 15:24:51.933303   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.933314   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:24:51.933321   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:24:51.933385   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:24:51.965174   65605 cri.go:89] found id: ""
	I0723 15:24:51.965205   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.965217   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:24:51.965227   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:24:51.965296   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:24:51.999785   65605 cri.go:89] found id: ""
	I0723 15:24:51.999812   65605 logs.go:276] 0 containers: []
	W0723 15:24:51.999822   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:24:51.999841   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:24:51.999914   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:24:52.035592   65605 cri.go:89] found id: ""
	I0723 15:24:52.035619   65605 logs.go:276] 0 containers: []
	W0723 15:24:52.035630   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:24:52.035641   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:24:52.035656   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:24:52.048683   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:24:52.048711   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:24:52.112319   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0723 15:24:52.112338   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:24:52.112351   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:24:52.196596   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:24:52.196632   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:24:52.235608   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:24:52.235635   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:24:54.786414   65605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:24:54.799864   65605 kubeadm.go:597] duration metric: took 4m4.703331486s to restartPrimaryControlPlane
	W0723 15:24:54.799946   65605 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0723 15:24:54.799996   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:24:52.134857   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:54.633070   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:55.653385   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.154569   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:58.675405   65605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.875388525s)
	I0723 15:24:58.675461   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:24:58.689878   65605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:24:58.699568   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:24:58.708541   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:24:58.708559   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:24:58.708604   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:24:58.717055   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:24:58.717108   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:24:58.725736   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:24:58.734127   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:24:58.734227   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:24:58.742862   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.750696   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:24:58.750747   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:24:58.759235   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:24:58.768036   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:24:58.768094   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:24:58.777299   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:24:58.976177   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:24:57.133412   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:24:59.633162   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:00.652486   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.653128   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.654556   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:02.132762   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:04.134714   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:06.632391   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:07.152861   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:09.153443   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:08.633329   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.133963   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:11.652964   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:13.653225   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:14.921745   65177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.521789017s)
	I0723 15:25:14.921814   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:14.937627   65177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 15:25:14.948238   65177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:25:14.958145   65177 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:25:14.958171   65177 kubeadm.go:157] found existing configuration files:
	
	I0723 15:25:14.958223   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:25:14.967224   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:25:14.967282   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:25:14.975995   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:25:14.984981   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:25:14.985040   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:25:14.993733   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.002214   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:25:15.002265   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:25:15.012952   65177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:25:15.022716   65177 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:25:15.022775   65177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 15:25:15.032954   65177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:25:15.081347   65177 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 15:25:15.081412   65177 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:25:15.217189   65177 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:25:15.217316   65177 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:25:15.217421   65177 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:25:15.414012   65177 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:25:15.415975   65177 out.go:204]   - Generating certificates and keys ...
	I0723 15:25:15.416086   65177 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:25:15.416172   65177 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:25:15.416284   65177 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:25:15.416378   65177 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:25:15.416512   65177 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:25:15.416600   65177 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:25:15.416690   65177 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:25:15.416781   65177 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:25:15.416901   65177 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:25:15.417027   65177 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:25:15.417091   65177 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:25:15.417169   65177 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:25:15.577526   65177 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:25:15.771865   65177 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 15:25:15.968841   65177 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:25:16.376626   65177 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:25:16.569425   65177 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:25:16.570004   65177 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:25:16.572623   65177 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:25:13.633779   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.133051   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:16.574399   65177 out.go:204]   - Booting up control plane ...
	I0723 15:25:16.574516   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:25:16.574622   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:25:16.575046   65177 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:25:16.594177   65177 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:25:16.595205   65177 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:25:16.595310   65177 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:25:16.739893   65177 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 15:25:16.740022   65177 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 15:25:17.242030   65177 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.858581ms
	I0723 15:25:17.242119   65177 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 15:25:15.653757   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.153924   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:20.154226   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:18.634047   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:21.132773   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:22.244539   65177 kubeadm.go:310] [api-check] The API server is healthy after 5.002291296s
	I0723 15:25:22.260367   65177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 15:25:22.272659   65177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 15:25:22.304686   65177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 15:25:22.304939   65177 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-486436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 15:25:22.318299   65177 kubeadm.go:310] [bootstrap-token] Using token: 1476j9.4ihrwdjbg4aq5odf
	I0723 15:25:22.319736   65177 out.go:204]   - Configuring RBAC rules ...
	I0723 15:25:22.319899   65177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 15:25:22.329081   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 15:25:22.340687   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 15:25:22.344962   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 15:25:22.348526   65177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 15:25:22.355955   65177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 15:25:22.652467   65177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 15:25:23.122105   65177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 15:25:23.653074   65177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 15:25:23.654335   65177 kubeadm.go:310] 
	I0723 15:25:23.654448   65177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 15:25:23.654461   65177 kubeadm.go:310] 
	I0723 15:25:23.654580   65177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 15:25:23.654599   65177 kubeadm.go:310] 
	I0723 15:25:23.654648   65177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 15:25:23.654721   65177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 15:25:23.654796   65177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 15:25:23.654821   65177 kubeadm.go:310] 
	I0723 15:25:23.654902   65177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 15:25:23.654925   65177 kubeadm.go:310] 
	I0723 15:25:23.655000   65177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 15:25:23.655010   65177 kubeadm.go:310] 
	I0723 15:25:23.655076   65177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 15:25:23.655174   65177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 15:25:23.655256   65177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 15:25:23.655264   65177 kubeadm.go:310] 
	I0723 15:25:23.655352   65177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 15:25:23.655440   65177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 15:25:23.655459   65177 kubeadm.go:310] 
	I0723 15:25:23.655579   65177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.655719   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 \
	I0723 15:25:23.655752   65177 kubeadm.go:310] 	--control-plane 
	I0723 15:25:23.655771   65177 kubeadm.go:310] 
	I0723 15:25:23.655896   65177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 15:25:23.655904   65177 kubeadm.go:310] 
	I0723 15:25:23.656005   65177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1476j9.4ihrwdjbg4aq5odf \
	I0723 15:25:23.656141   65177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d95cc724c89ec5c349839c3cbe061d4a592a7033837a69e322f147164207231 
	I0723 15:25:23.656644   65177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:25:23.656674   65177 cni.go:84] Creating CNI manager for ""
	I0723 15:25:23.656686   65177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 15:25:23.659688   65177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0723 15:25:22.653874   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:24.654172   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.133652   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:25.633189   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:23.660997   65177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0723 15:25:23.671788   65177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
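	The 496-byte 1-k8s.conflist written above is not echoed in the log; for reference, a bridge-plus-portmap conflist of the general shape this step deploys looks like the following sketch (field values, including the pod subnet, are assumptions and may differ from the file minikube actually wrote):

	# Illustrative only: the shape of a bridge CNI conflist; subnet and flags are assumed.
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF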
	I0723 15:25:23.692109   65177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 15:25:23.692195   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:23.692199   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-486436 minikube.k8s.io/updated_at=2024_07_23T15_25_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-486436 minikube.k8s.io/primary=true
	I0723 15:25:23.716101   65177 ops.go:34] apiserver oom_adj: -16
	I0723 15:25:23.905952   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.405980   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:24.906787   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.406096   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:25.906365   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.406501   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:26.906068   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.406018   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.907033   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:27.153085   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.653377   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:27.633816   66641 pod_ready.go:102] pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:29.133531   66641 pod_ready.go:81] duration metric: took 4m0.007080073s for pod "metrics-server-569cc877fc-mkl8l" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:29.133554   66641 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:29.133561   66641 pod_ready.go:38] duration metric: took 4m4.545428088s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
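	The four-minute wait that just timed out polls each pod's Ready condition; the same check can be run by hand with the bundled kubectl (a sketch: the binary and kubeconfig paths come from the log, the label selector is assumed from the standard metrics-server addon):

	# Sketch: inspect the Ready condition pod_ready.go was polling; the
	# k8s-app=metrics-server selector is an assumption, not taken from the log.
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'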
	I0723 15:25:29.133577   66641 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:29.133601   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:29.133646   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:29.179796   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:29.179818   66641 cri.go:89] found id: ""
	I0723 15:25:29.179830   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:29.179882   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.184024   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:29.184095   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:29.219711   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:29.219740   66641 cri.go:89] found id: ""
	I0723 15:25:29.219749   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:29.219814   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.223687   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:29.223761   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:29.258473   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:29.258498   66641 cri.go:89] found id: ""
	I0723 15:25:29.258508   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:29.258556   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.262789   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:29.262857   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:29.304206   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.304233   66641 cri.go:89] found id: ""
	I0723 15:25:29.304242   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:29.304306   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.309658   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:29.309735   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:29.361664   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:29.361690   66641 cri.go:89] found id: ""
	I0723 15:25:29.361699   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:29.361758   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.366171   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:29.366248   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:29.414069   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:29.414094   66641 cri.go:89] found id: ""
	I0723 15:25:29.414104   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:29.414162   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.419607   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:29.419678   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:29.464533   66641 cri.go:89] found id: ""
	I0723 15:25:29.464563   66641 logs.go:276] 0 containers: []
	W0723 15:25:29.464573   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:29.464580   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:29.464640   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:29.499966   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:29.499991   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:29.499996   66641 cri.go:89] found id: ""
	I0723 15:25:29.500006   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:29.500063   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.503961   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:29.508088   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:29.508109   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:29.653373   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:29.653403   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:29.694171   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:29.694205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:30.262503   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:30.262559   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:30.304038   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:30.304070   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:30.357964   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:30.358013   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:30.372263   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:30.372296   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:30.418543   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:30.418583   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:30.470018   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:30.470050   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:30.503538   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:30.503579   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:30.538515   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:30.538554   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:30.599104   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:30.599137   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:30.635841   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:30.635867   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
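	The collection pass above follows one pattern per component: resolve container IDs with crictl ps, then tail each container's log. A condensed manual equivalent (the crictl flags are the ones shown in the log; the loop structure is assumed, not the test's own code):

	# Sketch of the per-component log gathering performed above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name $id ==="
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	  done
	done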
	I0723 15:25:28.406535   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:28.906729   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.406804   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:29.906364   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.406245   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:30.906646   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.406143   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.906645   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.406411   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:32.906643   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:31.653490   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.654773   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:33.406893   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:33.906016   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.406827   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:34.906668   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.406337   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:35.906162   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.406864   65177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 15:25:36.502155   65177 kubeadm.go:1113] duration metric: took 12.810025657s to wait for elevateKubeSystemPrivileges
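	elevateKubeSystemPrivileges above amounts to waiting for the default ServiceAccount to exist and then binding cluster-admin to kube-system:default; both commands appear verbatim earlier in the log, and a manual sketch of the same sequence (polling loop and retry interval assumed) is:

	# Poll until the default ServiceAccount exists, then create the minikube-rbac binding.
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig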
	I0723 15:25:36.502200   65177 kubeadm.go:394] duration metric: took 5m9.050239878s to StartCluster
	I0723 15:25:36.502225   65177 settings.go:142] acquiring lock: {Name:mk4523377973c43c4fcd6af6d81d5e82f58ed8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.502332   65177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:25:36.504959   65177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-11303/kubeconfig: {Name:mk88cbd9d06449f117a02cad577835de850199a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:25:36.505284   65177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 15:25:36.505373   65177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0723 15:25:36.505452   65177 config.go:182] Loaded profile config "embed-certs-486436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:25:36.505461   65177 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-486436"
	I0723 15:25:36.505486   65177 addons.go:69] Setting metrics-server=true in profile "embed-certs-486436"
	I0723 15:25:36.505494   65177 addons.go:69] Setting default-storageclass=true in profile "embed-certs-486436"
	I0723 15:25:36.505509   65177 addons.go:234] Setting addon metrics-server=true in "embed-certs-486436"
	W0723 15:25:36.505518   65177 addons.go:243] addon metrics-server should already be in state true
	I0723 15:25:36.505535   65177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-486436"
	I0723 15:25:36.505541   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505487   65177 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-486436"
	W0723 15:25:36.505635   65177 addons.go:243] addon storage-provisioner should already be in state true
	I0723 15:25:36.505652   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.505919   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505938   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505950   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.505959   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.505987   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.506050   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.507034   65177 out.go:177] * Verifying Kubernetes components...
	I0723 15:25:36.508493   65177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:25:36.521500   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0723 15:25:36.521508   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0723 15:25:36.521836   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0723 15:25:36.522060   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522168   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522198   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.522626   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522674   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522696   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522710   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.522713   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.522724   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.523009   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523043   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523309   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.523454   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.523518   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523542   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.523629   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.523665   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.527348   65177 addons.go:234] Setting addon default-storageclass=true in "embed-certs-486436"
	W0723 15:25:36.527370   65177 addons.go:243] addon default-storageclass should already be in state true
	I0723 15:25:36.527399   65177 host.go:66] Checking if "embed-certs-486436" exists ...
	I0723 15:25:36.527752   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.527784   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.540037   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0723 15:25:36.540208   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0723 15:25:36.540572   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.540689   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.541105   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541113   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.541122   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541123   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.541455   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541454   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.541618   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.541686   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.543525   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.543999   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.545455   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0723 15:25:36.545800   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.545846   65177 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0723 15:25:36.545906   65177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 15:25:33.172857   66641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:33.188951   66641 api_server.go:72] duration metric: took 4m16.32591009s to wait for apiserver process to appear ...
	I0723 15:25:33.188979   66641 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:33.189022   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:33.189077   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:33.228175   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.228204   66641 cri.go:89] found id: ""
	I0723 15:25:33.228213   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:33.228271   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.232451   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:33.232518   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:33.268343   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.268362   66641 cri.go:89] found id: ""
	I0723 15:25:33.268371   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:33.268426   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.272333   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:33.272388   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:33.305913   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.305936   66641 cri.go:89] found id: ""
	I0723 15:25:33.305945   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:33.305998   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.310500   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:33.310573   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:33.345773   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.345798   66641 cri.go:89] found id: ""
	I0723 15:25:33.345807   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:33.345872   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.350031   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:33.350084   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:33.383305   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.383331   66641 cri.go:89] found id: ""
	I0723 15:25:33.383341   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:33.383399   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.387279   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:33.387331   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:33.428442   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.428468   66641 cri.go:89] found id: ""
	I0723 15:25:33.428478   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:33.428676   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.432814   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:33.432879   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:33.469064   66641 cri.go:89] found id: ""
	I0723 15:25:33.469093   66641 logs.go:276] 0 containers: []
	W0723 15:25:33.469105   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:33.469112   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:33.469164   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:33.509131   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:33.509161   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.509168   66641 cri.go:89] found id: ""
	I0723 15:25:33.509177   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:33.509240   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.513478   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:33.517125   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:33.517152   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:33.554974   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:33.555004   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:33.606042   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:33.606074   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:33.648068   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:33.648100   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:33.698660   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:33.698690   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:33.797480   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:33.797508   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:33.812119   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:33.812146   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:33.863628   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:33.863661   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:33.913667   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:33.913695   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:33.949115   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:33.949144   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:33.988180   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:33.988205   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:34.023679   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:34.023705   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:34.481829   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:34.481886   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:36.546218   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.546238   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.546607   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.547165   65177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 15:25:36.547209   65177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 15:25:36.547534   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 15:25:36.547548   65177 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 15:25:36.547565   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.547735   65177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.547752   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 15:25:36.547771   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.551130   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551764   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551767   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.551800   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551819   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.551844   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.551871   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.552160   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.552187   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.552429   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552608   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.552606   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.552797   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.567445   65177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0723 15:25:36.567912   65177 main.go:141] libmachine: () Calling .GetVersion
	I0723 15:25:36.568411   65177 main.go:141] libmachine: Using API Version  1
	I0723 15:25:36.568432   65177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 15:25:36.568752   65177 main.go:141] libmachine: () Calling .GetMachineName
	I0723 15:25:36.568949   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetState
	I0723 15:25:36.570216   65177 main.go:141] libmachine: (embed-certs-486436) Calling .DriverName
	I0723 15:25:36.570524   65177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.570580   65177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 15:25:36.570620   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHHostname
	I0723 15:25:36.572949   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573375   65177 main.go:141] libmachine: (embed-certs-486436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:49:db", ip: ""} in network mk-embed-certs-486436: {Iface:virbr1 ExpiryTime:2024-07-23 16:20:12 +0000 UTC Type:0 Mac:52:54:00:2e:49:db Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-486436 Clientid:01:52:54:00:2e:49:db}
	I0723 15:25:36.573402   65177 main.go:141] libmachine: (embed-certs-486436) DBG | domain embed-certs-486436 has defined IP address 192.168.39.200 and MAC address 52:54:00:2e:49:db in network mk-embed-certs-486436
	I0723 15:25:36.573509   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHPort
	I0723 15:25:36.573658   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHKeyPath
	I0723 15:25:36.573787   65177 main.go:141] libmachine: (embed-certs-486436) Calling .GetSSHUsername
	I0723 15:25:36.573918   65177 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/embed-certs-486436/id_rsa Username:docker}
	I0723 15:25:36.722640   65177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:25:36.756372   65177 node_ready.go:35] waiting up to 6m0s for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.779995   65177 node_ready.go:49] node "embed-certs-486436" has status "Ready":"True"
	I0723 15:25:36.780025   65177 node_ready.go:38] duration metric: took 23.62289ms for node "embed-certs-486436" to be "Ready" ...
	I0723 15:25:36.780039   65177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:36.807738   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 15:25:36.810749   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:36.820589   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 15:25:36.820613   65177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0723 15:25:36.880548   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 15:25:36.880581   65177 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 15:25:36.961807   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 15:25:36.962203   65177 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:36.962229   65177 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 15:25:37.055123   65177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 15:25:37.148724   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.148749   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149038   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149096   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.149114   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.149123   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.149412   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.149432   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161152   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:37.161173   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:37.161477   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:37.161496   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:37.161496   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.119897   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158050831s)
	I0723 15:25:38.120002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120022   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120358   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.120383   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.120399   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.120413   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.120361   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122012   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.122234   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.122252   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.401938   65177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.346767402s)
	I0723 15:25:38.402002   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402019   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402366   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402391   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402401   65177 main.go:141] libmachine: Making call to close driver server
	I0723 15:25:38.402409   65177 main.go:141] libmachine: (embed-certs-486436) Calling .Close
	I0723 15:25:38.402725   65177 main.go:141] libmachine: (embed-certs-486436) DBG | Closing plugin on server side
	I0723 15:25:38.402738   65177 main.go:141] libmachine: Successfully made call to close driver server
	I0723 15:25:38.402762   65177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0723 15:25:38.402773   65177 addons.go:475] Verifying addon metrics-server=true in "embed-certs-486436"
	I0723 15:25:38.404515   65177 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0723 15:25:36.154127   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.155104   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:38.405850   65177 addons.go:510] duration metric: took 1.90047622s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0723 15:25:38.816969   65177 pod_ready.go:102] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:39.316609   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.316632   65177 pod_ready.go:81] duration metric: took 2.505858486s for pod "coredns-7db6d8ff4d-hnlc7" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.316642   65177 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327865   65177 pod_ready.go:92] pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.327890   65177 pod_ready.go:81] duration metric: took 11.242778ms for pod "coredns-7db6d8ff4d-lj5xg" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.327900   65177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332886   65177 pod_ready.go:92] pod "etcd-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.332914   65177 pod_ready.go:81] duration metric: took 5.006846ms for pod "etcd-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.332925   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337166   65177 pod_ready.go:92] pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.337183   65177 pod_ready.go:81] duration metric: took 4.252609ms for pod "kube-apiserver-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.337198   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341748   65177 pod_ready.go:92] pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.341762   65177 pod_ready.go:81] duration metric: took 4.559215ms for pod "kube-controller-manager-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.341771   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714214   65177 pod_ready.go:92] pod "kube-proxy-wzh4d" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:39.714237   65177 pod_ready.go:81] duration metric: took 372.459367ms for pod "kube-proxy-wzh4d" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:39.714247   65177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114721   65177 pod_ready.go:92] pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace has status "Ready":"True"
	I0723 15:25:40.114744   65177 pod_ready.go:81] duration metric: took 400.490439ms for pod "kube-scheduler-embed-certs-486436" in "kube-system" namespace to be "Ready" ...
	I0723 15:25:40.114752   65177 pod_ready.go:38] duration metric: took 3.334700958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:40.114765   65177 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:40.114821   65177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:25:40.130577   65177 api_server.go:72] duration metric: took 3.625254211s to wait for apiserver process to appear ...
	I0723 15:25:40.130607   65177 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:25:40.130624   65177 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0723 15:25:40.134690   65177 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0723 15:25:40.135639   65177 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:40.135658   65177 api_server.go:131] duration metric: took 5.04581ms to wait for apiserver health ...
	I0723 15:25:40.135665   65177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:40.318436   65177 system_pods.go:59] 9 kube-system pods found
	I0723 15:25:40.318466   65177 system_pods.go:61] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.318471   65177 system_pods.go:61] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.318474   65177 system_pods.go:61] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.318478   65177 system_pods.go:61] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.318481   65177 system_pods.go:61] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.318484   65177 system_pods.go:61] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.318487   65177 system_pods.go:61] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.318492   65177 system_pods.go:61] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.318497   65177 system_pods.go:61] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.318506   65177 system_pods.go:74] duration metric: took 182.836785ms to wait for pod list to return data ...
	I0723 15:25:40.318514   65177 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.514737   65177 default_sa.go:45] found service account: "default"
	I0723 15:25:40.514768   65177 default_sa.go:55] duration metric: took 196.245408ms for default service account to be created ...
	I0723 15:25:40.514779   65177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.718646   65177 system_pods.go:86] 9 kube-system pods found
	I0723 15:25:40.718675   65177 system_pods.go:89] "coredns-7db6d8ff4d-hnlc7" [15da0e07-9db4-423d-b833-ee598822f88f] Running
	I0723 15:25:40.718684   65177 system_pods.go:89] "coredns-7db6d8ff4d-lj5xg" [3ca106cd-e6ab-4dc7-a602-3b304401d255] Running
	I0723 15:25:40.718690   65177 system_pods.go:89] "etcd-embed-certs-486436" [5effbb63-7030-4eaa-b0ae-cefe4ea63c02] Running
	I0723 15:25:40.718696   65177 system_pods.go:89] "kube-apiserver-embed-certs-486436" [616f5e6f-d4d5-419f-9335-e737999e975f] Running
	I0723 15:25:40.718702   65177 system_pods.go:89] "kube-controller-manager-embed-certs-486436" [b1b90791-d64a-41b9-9a09-cb3ffe3ede43] Running
	I0723 15:25:40.718707   65177 system_pods.go:89] "kube-proxy-wzh4d" [838e5bd5-75c9-4dcd-a49b-cd09b0bad7af] Running
	I0723 15:25:40.718713   65177 system_pods.go:89] "kube-scheduler-embed-certs-486436" [513dd710-a954-4f2b-9a37-d35c1758c028] Running
	I0723 15:25:40.718721   65177 system_pods.go:89] "metrics-server-569cc877fc-7l2jw" [d7796159-5366-4909-b019-84a0f104667f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.718728   65177 system_pods.go:89] "storage-provisioner" [c4a7dedd-e070-447a-b57a-9f19d00fb80b] Running
	I0723 15:25:40.718743   65177 system_pods.go:126] duration metric: took 203.95636ms to wait for k8s-apps to be running ...
	I0723 15:25:40.718756   65177 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.718809   65177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.733038   65177 system_svc.go:56] duration metric: took 14.275362ms WaitForService to wait for kubelet
	I0723 15:25:40.733069   65177 kubeadm.go:582] duration metric: took 4.227749087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.733088   65177 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.914859   65177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.914886   65177 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.914898   65177 node_conditions.go:105] duration metric: took 181.804872ms to run NodePressure ...
	I0723 15:25:40.914909   65177 start.go:241] waiting for startup goroutines ...
	I0723 15:25:40.914918   65177 start.go:246] waiting for cluster config update ...
	I0723 15:25:40.914932   65177 start.go:255] writing updated cluster config ...
	I0723 15:25:40.915235   65177 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:40.963735   65177 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:40.966048   65177 out.go:177] * Done! kubectl is now configured to use "embed-certs-486436" cluster and "default" namespace by default
	I0723 15:25:37.033161   66641 api_server.go:253] Checking apiserver healthz at https://192.168.61.64:8444/healthz ...
	I0723 15:25:37.039656   66641 api_server.go:279] https://192.168.61.64:8444/healthz returned 200:
	ok
	I0723 15:25:37.040745   66641 api_server.go:141] control plane version: v1.30.3
	I0723 15:25:37.040768   66641 api_server.go:131] duration metric: took 3.851781875s to wait for apiserver health ...
	I0723 15:25:37.040781   66641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:25:37.040807   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:37.040868   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:37.090495   66641 cri.go:89] found id: "96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:37.090524   66641 cri.go:89] found id: ""
	I0723 15:25:37.090533   66641 logs.go:276] 1 containers: [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e]
	I0723 15:25:37.090608   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.094934   66641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:37.095005   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:37.138911   66641 cri.go:89] found id: "e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.138937   66641 cri.go:89] found id: ""
	I0723 15:25:37.138947   66641 logs.go:276] 1 containers: [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0]
	I0723 15:25:37.139006   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.143876   66641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:37.143937   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:37.187419   66641 cri.go:89] found id: "b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.187446   66641 cri.go:89] found id: ""
	I0723 15:25:37.187455   66641 logs.go:276] 1 containers: [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344]
	I0723 15:25:37.187514   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.191818   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:37.191896   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:37.232332   66641 cri.go:89] found id: "9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:37.232358   66641 cri.go:89] found id: ""
	I0723 15:25:37.232366   66641 logs.go:276] 1 containers: [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3]
	I0723 15:25:37.232414   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.236718   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:37.236795   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:37.273231   66641 cri.go:89] found id: "48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.273259   66641 cri.go:89] found id: ""
	I0723 15:25:37.273269   66641 logs.go:276] 1 containers: [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb]
	I0723 15:25:37.273339   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.279499   66641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:37.279575   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:37.316848   66641 cri.go:89] found id: "bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.316867   66641 cri.go:89] found id: ""
	I0723 15:25:37.316875   66641 logs.go:276] 1 containers: [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da]
	I0723 15:25:37.316931   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.321920   66641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:37.321991   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:37.361804   66641 cri.go:89] found id: ""
	I0723 15:25:37.361833   66641 logs.go:276] 0 containers: []
	W0723 15:25:37.361844   66641 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:37.361850   66641 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:37.361909   66641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:37.401687   66641 cri.go:89] found id: "68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:37.401715   66641 cri.go:89] found id: "01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:37.401720   66641 cri.go:89] found id: ""
	I0723 15:25:37.401729   66641 logs.go:276] 2 containers: [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab]
	I0723 15:25:37.401788   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.406444   66641 ssh_runner.go:195] Run: which crictl
	I0723 15:25:37.410788   66641 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:37.410812   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:37.427033   66641 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:37.427063   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:37.567851   66641 logs.go:123] Gathering logs for etcd [e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0] ...
	I0723 15:25:37.567884   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e73340ee36d2f596ac1b39e490ae41748086b0e3b1b4bfe3d4b615879394cca0"
	I0723 15:25:37.633966   66641 logs.go:123] Gathering logs for coredns [b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344] ...
	I0723 15:25:37.634003   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b58d38beb8d0093d3d1b8463263a745f34f53fdf949b7339a763def30f132344"
	I0723 15:25:37.679663   66641 logs.go:123] Gathering logs for kube-proxy [48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb] ...
	I0723 15:25:37.679701   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a478b951b428456e5d42c22f6ed672dc5bbcd34cde13ebadf2d374f67dfbfb"
	I0723 15:25:37.715046   66641 logs.go:123] Gathering logs for kube-controller-manager [bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da] ...
	I0723 15:25:37.715084   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcc1ca16d82a0e0dee40ca1b4d61ea2caf6653a3b40eef4ea4668b8a7a1e89da"
	I0723 15:25:37.779870   66641 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:37.779917   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:38.166491   66641 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:38.166527   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:38.222592   66641 logs.go:123] Gathering logs for kube-apiserver [96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e] ...
	I0723 15:25:38.222625   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96e46e540ab2c02996f1d598d6c84c7eb3a7e349c8c7a2668b6cae7a0ed8ca8e"
	I0723 15:25:38.282823   66641 logs.go:123] Gathering logs for kube-scheduler [9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3] ...
	I0723 15:25:38.282864   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac0a72e3783109e331239ff407a9e74ec9853707293ef3cfc96f72254c0a1f3"
	I0723 15:25:38.320076   66641 logs.go:123] Gathering logs for storage-provisioner [68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868] ...
	I0723 15:25:38.320114   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68672c3e7b7b1040e08c48567eb55288adb9bda2dde409113ac39f3f4dafe868"
	I0723 15:25:38.361845   66641 logs.go:123] Gathering logs for storage-provisioner [01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab] ...
	I0723 15:25:38.361873   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a650a53706bb8a2aa4e700a8434a5e0ac79f45e313599b652ffc24cff0c6ab"
	I0723 15:25:38.404791   66641 logs.go:123] Gathering logs for container status ...
	I0723 15:25:38.404818   66641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:40.969345   66641 system_pods.go:59] 8 kube-system pods found
	I0723 15:25:40.969373   66641 system_pods.go:61] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.969378   66641 system_pods.go:61] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.969384   66641 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.969388   66641 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.969392   66641 system_pods.go:61] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.969395   66641 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.969403   66641 system_pods.go:61] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.969407   66641 system_pods.go:61] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.969419   66641 system_pods.go:74] duration metric: took 3.928631967s to wait for pod list to return data ...
	I0723 15:25:40.969430   66641 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:25:40.971647   66641 default_sa.go:45] found service account: "default"
	I0723 15:25:40.971668   66641 default_sa.go:55] duration metric: took 2.232202ms for default service account to be created ...
	I0723 15:25:40.971675   66641 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:25:40.976760   66641 system_pods.go:86] 8 kube-system pods found
	I0723 15:25:40.976782   66641 system_pods.go:89] "coredns-7db6d8ff4d-9qcfs" [663c125b-bed4-4622-8f0c-ff7837073bbd] Running
	I0723 15:25:40.976787   66641 system_pods.go:89] "etcd-default-k8s-diff-port-911217" [931a3c49-2bb2-4614-ad1b-ab8aced11e5b] Running
	I0723 15:25:40.976793   66641 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-911217" [5a5e188b-add1-43d0-a3b5-cfd6d2d76f01] Running
	I0723 15:25:40.976798   66641 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-911217" [6395180b-9d91-4ded-9f0f-44ce2a2c4ed4] Running
	I0723 15:25:40.976805   66641 system_pods.go:89] "kube-proxy-d4zwd" [55082c05-5fee-4c2a-ab31-897d838164d0] Running
	I0723 15:25:40.976809   66641 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-911217" [ca898ca4-44c6-4895-b11e-26ae25214a1e] Running
	I0723 15:25:40.976818   66641 system_pods.go:89] "metrics-server-569cc877fc-mkl8l" [9e129e04-b1b8-47e8-9c07-20cdc89705e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:25:40.976825   66641 system_pods.go:89] "storage-provisioner" [8a893464-6a36-4a91-9dde-8cb58d7dcfa8] Running
	I0723 15:25:40.976832   66641 system_pods.go:126] duration metric: took 5.152102ms to wait for k8s-apps to be running ...
	I0723 15:25:40.976838   66641 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:25:40.976875   66641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:25:40.996951   66641 system_svc.go:56] duration metric: took 20.10286ms WaitForService to wait for kubelet
	I0723 15:25:40.996983   66641 kubeadm.go:582] duration metric: took 4m24.133944078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:25:40.997007   66641 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:25:40.999958   66641 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:25:40.999980   66641 node_conditions.go:123] node cpu capacity is 2
	I0723 15:25:40.999991   66641 node_conditions.go:105] duration metric: took 2.97868ms to run NodePressure ...
	I0723 15:25:41.000002   66641 start.go:241] waiting for startup goroutines ...
	I0723 15:25:41.000008   66641 start.go:246] waiting for cluster config update ...
	I0723 15:25:41.000017   66641 start.go:255] writing updated cluster config ...
	I0723 15:25:41.000292   66641 ssh_runner.go:195] Run: rm -f paused
	I0723 15:25:41.058447   66641 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 15:25:41.060584   66641 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-911217" cluster and "default" namespace by default
	I0723 15:25:40.652692   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:42.653402   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:44.653499   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:47.153167   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:49.652723   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:51.653106   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:54.152382   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.153666   64842 pod_ready.go:102] pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace has status "Ready":"False"
	I0723 15:25:56.652308   64842 pod_ready.go:81] duration metric: took 4m0.005573507s for pod "metrics-server-78fcd8795b-dsfmg" in "kube-system" namespace to be "Ready" ...
	E0723 15:25:56.652340   64842 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0723 15:25:56.652348   64842 pod_ready.go:38] duration metric: took 4m3.607231702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 15:25:56.652364   64842 api_server.go:52] waiting for apiserver process to appear ...
	I0723 15:25:56.652389   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:25:56.652432   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:25:56.709002   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:56.709024   64842 cri.go:89] found id: ""
	I0723 15:25:56.709031   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:25:56.709076   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.713436   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:25:56.713496   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:25:56.748180   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:56.748203   64842 cri.go:89] found id: ""
	I0723 15:25:56.748212   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:25:56.748267   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.753878   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:25:56.753950   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:25:56.790420   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:56.790443   64842 cri.go:89] found id: ""
	I0723 15:25:56.790450   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:25:56.790503   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.794360   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:25:56.794430   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:25:56.833056   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:56.833084   64842 cri.go:89] found id: ""
	I0723 15:25:56.833093   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:25:56.833158   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.838040   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:25:56.838097   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:25:56.877548   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:56.877569   64842 cri.go:89] found id: ""
	I0723 15:25:56.877576   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:25:56.877620   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.881682   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:25:56.881754   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:25:56.931794   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:56.931821   64842 cri.go:89] found id: ""
	I0723 15:25:56.931831   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:25:56.931903   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:56.936454   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:25:56.936529   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:25:56.974347   64842 cri.go:89] found id: ""
	I0723 15:25:56.974373   64842 logs.go:276] 0 containers: []
	W0723 15:25:56.974401   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:25:56.974411   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:25:56.974595   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:25:57.008960   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:57.008986   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:25:57.008990   64842 cri.go:89] found id: ""
	I0723 15:25:57.008997   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:25:57.009044   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.013403   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:25:57.017022   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:25:57.017041   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:25:57.031010   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:25:57.031038   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:25:57.162515   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:25:57.162548   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:25:57.202805   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:25:57.202840   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:25:57.238593   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:25:57.238622   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:25:57.740811   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:25:57.740854   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:25:57.786125   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:25:57.786154   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:25:57.839346   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:25:57.839389   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:25:57.885507   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:25:57.885545   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:25:57.923025   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:25:57.923058   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:25:57.961082   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:25:57.961112   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:25:58.013561   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:25:58.013602   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:25:58.051695   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:25:58.051733   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.585802   64842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:26:00.601135   64842 api_server.go:72] duration metric: took 4m14.792155211s to wait for apiserver process to appear ...
	I0723 15:26:00.601167   64842 api_server.go:88] waiting for apiserver healthz status ...
	I0723 15:26:00.601210   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:00.601269   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:00.641653   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:00.641678   64842 cri.go:89] found id: ""
	I0723 15:26:00.641687   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:00.641751   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.645831   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:00.645886   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:00.684737   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:00.684763   64842 cri.go:89] found id: ""
	I0723 15:26:00.684773   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:00.684836   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.689094   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:00.689140   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:00.725761   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:00.725787   64842 cri.go:89] found id: ""
	I0723 15:26:00.725795   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:00.725838   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.729843   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:00.729928   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:00.769870   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:00.769890   64842 cri.go:89] found id: ""
	I0723 15:26:00.769897   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:00.769942   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.774178   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:00.774235   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:00.816236   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:00.816261   64842 cri.go:89] found id: ""
	I0723 15:26:00.816268   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:00.816315   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.820577   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:00.820632   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:00.866824   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:00.866849   64842 cri.go:89] found id: ""
	I0723 15:26:00.866857   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:00.866910   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.871035   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:00.871089   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:00.913991   64842 cri.go:89] found id: ""
	I0723 15:26:00.914020   64842 logs.go:276] 0 containers: []
	W0723 15:26:00.914029   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:00.914035   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:00.914091   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:00.954766   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:00.954789   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.954795   64842 cri.go:89] found id: ""
	I0723 15:26:00.954804   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:00.954855   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.959067   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:00.962784   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:00.962807   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:00.998749   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:00.998781   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:01.454863   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:01.454902   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:01.505800   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:01.505829   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:01.555977   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:01.556008   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:01.591914   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:01.591942   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:01.649054   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:01.649083   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:01.682090   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:01.682116   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:01.721805   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:01.721832   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:01.758403   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:01.758432   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:01.808766   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:01.808803   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:01.823556   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:01.823589   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:01.936323   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:01.936355   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.478126   64842 api_server.go:253] Checking apiserver healthz at https://192.168.72.227:8443/healthz ...
	I0723 15:26:04.483667   64842 api_server.go:279] https://192.168.72.227:8443/healthz returned 200:
	ok
	I0723 15:26:04.484710   64842 api_server.go:141] control plane version: v1.31.0-beta.0
	I0723 15:26:04.484730   64842 api_server.go:131] duration metric: took 3.883557615s to wait for apiserver health ...
	I0723 15:26:04.484737   64842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 15:26:04.484759   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:26:04.484810   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:26:04.522732   64842 cri.go:89] found id: "64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:04.522757   64842 cri.go:89] found id: ""
	I0723 15:26:04.522766   64842 logs.go:276] 1 containers: [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e]
	I0723 15:26:04.522825   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.526922   64842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:26:04.526986   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:26:04.572736   64842 cri.go:89] found id: "e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.572761   64842 cri.go:89] found id: ""
	I0723 15:26:04.572770   64842 logs.go:276] 1 containers: [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0]
	I0723 15:26:04.572828   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.576911   64842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:26:04.576966   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:26:04.612283   64842 cri.go:89] found id: "289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.612310   64842 cri.go:89] found id: ""
	I0723 15:26:04.612318   64842 logs.go:276] 1 containers: [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca]
	I0723 15:26:04.612367   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.616609   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:26:04.616660   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:26:04.653775   64842 cri.go:89] found id: "bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:04.653800   64842 cri.go:89] found id: ""
	I0723 15:26:04.653808   64842 logs.go:276] 1 containers: [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14]
	I0723 15:26:04.653883   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.658242   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:26:04.658298   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:26:04.699132   64842 cri.go:89] found id: "62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.699155   64842 cri.go:89] found id: ""
	I0723 15:26:04.699164   64842 logs.go:276] 1 containers: [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca]
	I0723 15:26:04.699225   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.703672   64842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:26:04.703735   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:26:04.740522   64842 cri.go:89] found id: "7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:04.740541   64842 cri.go:89] found id: ""
	I0723 15:26:04.740548   64842 logs.go:276] 1 containers: [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d]
	I0723 15:26:04.740605   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.745065   64842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:26:04.745134   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:26:04.779209   64842 cri.go:89] found id: ""
	I0723 15:26:04.779234   64842 logs.go:276] 0 containers: []
	W0723 15:26:04.779242   64842 logs.go:278] No container was found matching "kindnet"
	I0723 15:26:04.779255   64842 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0723 15:26:04.779321   64842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0723 15:26:04.816696   64842 cri.go:89] found id: "33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.816713   64842 cri.go:89] found id: "2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:04.816718   64842 cri.go:89] found id: ""
	I0723 15:26:04.816728   64842 logs.go:276] 2 containers: [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6]
	I0723 15:26:04.816777   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.820775   64842 ssh_runner.go:195] Run: which crictl
	I0723 15:26:04.824335   64842 logs.go:123] Gathering logs for etcd [e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0] ...
	I0723 15:26:04.824362   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e23570772b1baacb7f8dab7b740a0ee746844e22969226336bf53f01ee03b8c0"
	I0723 15:26:04.865073   64842 logs.go:123] Gathering logs for coredns [289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca] ...
	I0723 15:26:04.865105   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289a796ff2c743121776609422d6ae661e0a5395cb6a9cf7e732630cfb9a9aca"
	I0723 15:26:04.903588   64842 logs.go:123] Gathering logs for kube-proxy [62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca] ...
	I0723 15:26:04.903617   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62a5ee505542b6edd7c3f6ccdda92bac894120ad61cf33d95273869b1fae3bca"
	I0723 15:26:04.939994   64842 logs.go:123] Gathering logs for storage-provisioner [33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7] ...
	I0723 15:26:04.940022   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33bc08508dd461698184fd969f59f39d27624fd8947c1a828a91abac1e7cecb7"
	I0723 15:26:04.976373   64842 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:26:04.976402   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:26:05.355834   64842 logs.go:123] Gathering logs for kubelet ...
	I0723 15:26:05.355877   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:26:05.410198   64842 logs.go:123] Gathering logs for dmesg ...
	I0723 15:26:05.410228   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:26:05.424358   64842 logs.go:123] Gathering logs for kube-apiserver [64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e] ...
	I0723 15:26:05.424391   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d77a0d9b5ed7778bfc4a178ca72634936279a2425fe49d9487259596d5c09e"
	I0723 15:26:05.464494   64842 logs.go:123] Gathering logs for storage-provisioner [2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6] ...
	I0723 15:26:05.464526   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d2d4409a7d9adbf71a5151eb4a4ec284ccbfbc66ea76eea4101e358dea75aa6"
	I0723 15:26:05.496709   64842 logs.go:123] Gathering logs for container status ...
	I0723 15:26:05.496736   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:26:05.534919   64842 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:26:05.534959   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 15:26:05.640875   64842 logs.go:123] Gathering logs for kube-scheduler [bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14] ...
	I0723 15:26:05.640913   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf775206fb2d83c2c5a6584b59e547c67d31ac0764d5096b5f6a206327b3c14"
	I0723 15:26:05.678050   64842 logs.go:123] Gathering logs for kube-controller-manager [7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d] ...
	I0723 15:26:05.678078   64842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7006aba67d59f664e8c0125e66a3020f549edf3f3db97fbacedc8e71adb07f6d"
	I0723 15:26:08.236070   64842 system_pods.go:59] 8 kube-system pods found
	I0723 15:26:08.236336   64842 system_pods.go:61] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.236346   64842 system_pods.go:61] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.236351   64842 system_pods.go:61] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.236354   64842 system_pods.go:61] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.236357   64842 system_pods.go:61] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.236360   64842 system_pods.go:61] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.236368   64842 system_pods.go:61] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.236376   64842 system_pods.go:61] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.236382   64842 system_pods.go:74] duration metric: took 3.751640289s to wait for pod list to return data ...
	I0723 15:26:08.236391   64842 default_sa.go:34] waiting for default service account to be created ...
	I0723 15:26:08.239339   64842 default_sa.go:45] found service account: "default"
	I0723 15:26:08.239367   64842 default_sa.go:55] duration metric: took 2.96931ms for default service account to be created ...
	I0723 15:26:08.239378   64842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 15:26:08.244406   64842 system_pods.go:86] 8 kube-system pods found
	I0723 15:26:08.244432   64842 system_pods.go:89] "coredns-5cfdc65f69-v2bhl" [795d8c55-65e3-46c6-9b06-71f89ff17310] Running
	I0723 15:26:08.244438   64842 system_pods.go:89] "etcd-no-preload-543029" [b68780d4-7058-4b47-a37e-52d31c536669] Running
	I0723 15:26:08.244442   64842 system_pods.go:89] "kube-apiserver-no-preload-543029" [bc8ea63b-6b59-4fb2-8f3b-dcc06c6ac7c7] Running
	I0723 15:26:08.244447   64842 system_pods.go:89] "kube-controller-manager-no-preload-543029" [be582281-d854-42be-a116-bf3f99694789] Running
	I0723 15:26:08.244451   64842 system_pods.go:89] "kube-proxy-wzbps" [daefb252-a4db-4952-88fe-1e8e082a7625] Running
	I0723 15:26:08.244455   64842 system_pods.go:89] "kube-scheduler-no-preload-543029" [488b14d8-ecbf-446c-93e4-f6ea8763bd7d] Running
	I0723 15:26:08.244462   64842 system_pods.go:89] "metrics-server-78fcd8795b-dsfmg" [98637dfb-5600-4b7d-9272-ac5c5172d67b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0723 15:26:08.244468   64842 system_pods.go:89] "storage-provisioner" [96cee44d-4674-4d8b-8d1b-d6a8578d5bd0] Running
	I0723 15:26:08.244474   64842 system_pods.go:126] duration metric: took 5.091237ms to wait for k8s-apps to be running ...
	I0723 15:26:08.244481   64842 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 15:26:08.244521   64842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:08.260574   64842 system_svc.go:56] duration metric: took 16.083672ms WaitForService to wait for kubelet
	I0723 15:26:08.260610   64842 kubeadm.go:582] duration metric: took 4m22.451635049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 15:26:08.260634   64842 node_conditions.go:102] verifying NodePressure condition ...
	I0723 15:26:08.263927   64842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0723 15:26:08.263954   64842 node_conditions.go:123] node cpu capacity is 2
	I0723 15:26:08.263966   64842 node_conditions.go:105] duration metric: took 3.324706ms to run NodePressure ...
	I0723 15:26:08.263977   64842 start.go:241] waiting for startup goroutines ...
	I0723 15:26:08.263983   64842 start.go:246] waiting for cluster config update ...
	I0723 15:26:08.263992   64842 start.go:255] writing updated cluster config ...
	I0723 15:26:08.264250   64842 ssh_runner.go:195] Run: rm -f paused
	I0723 15:26:08.312776   64842 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0723 15:26:08.315009   64842 out.go:177] * Done! kubectl is now configured to use "no-preload-543029" cluster and "default" namespace by default
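The "minor skew: 1" note above (kubectl 1.30.3 against a 1.31.0-beta.0 cluster) is within the one-minor-version skew kubectl supports. A quick way to confirm the client/server versions yourself, sketched here on the assumption that the kubectl context carries the same name as the minikube profile shown in the log:

	# client and server versions; a +/-1 minor skew is supported by kubectl
	kubectl --context no-preload-543029 version
	# kubelet version reported per node in the same cluster
	kubectl --context no-preload-543029 get nodes -o wide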
	I0723 15:26:54.925074   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:26:54.925180   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:26:54.926872   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:54.926940   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:54.927022   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:54.927137   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:54.927252   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:54.927339   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:54.929261   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:54.929337   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:54.929399   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:54.929472   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:54.929580   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:54.929678   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:54.929758   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:54.929836   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:54.929924   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:54.930026   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:54.930118   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:54.930165   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:54.930210   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:54.930257   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:54.930300   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:54.930371   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:54.930438   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:54.930535   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:54.930631   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:54.930663   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:54.930752   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:54.932218   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:54.932344   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:54.932445   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:54.932537   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:54.932653   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:54.932869   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:26:54.932943   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:26:54.933025   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933337   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933600   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933701   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.933890   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.933995   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934238   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934331   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:26:54.934535   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:26:54.934546   65605 kubeadm.go:310] 
	I0723 15:26:54.934600   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:26:54.934663   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:26:54.934673   65605 kubeadm.go:310] 
	I0723 15:26:54.934723   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:26:54.934762   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:26:54.934848   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:26:54.934855   65605 kubeadm.go:310] 
	I0723 15:26:54.934948   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:26:54.934979   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:26:54.935026   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:26:54.935034   65605 kubeadm.go:310] 
	I0723 15:26:54.935136   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:26:54.935255   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:26:54.935265   65605 kubeadm.go:310] 
	I0723 15:26:54.935410   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:26:54.935519   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:26:54.935578   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:26:54.935637   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:26:54.935693   65605 kubeadm.go:310] 
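For reference, the triage steps kubeadm recommends above can be run directly on the node. This is only a sketch; the CRI-O socket path and the kube/pause filter are copied from the log output itself and may differ on other setups:

	# check whether the kubelet is running and look at its recent journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list control-plane containers known to CRI-O, then dump the logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>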
	W0723 15:26:54.935756   65605 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
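The stderr warning in the dump above is straightforward to act on; a one-line sketch, with the service name taken from the warning itself (adding --now also starts the unit immediately rather than only enabling it at boot):

	sudo systemctl enable --now kubelet.service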
	
	I0723 15:26:54.935811   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0723 15:26:55.388601   65605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:26:55.402519   65605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 15:26:55.412031   65605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 15:26:55.412054   65605 kubeadm.go:157] found existing configuration files:
	
	I0723 15:26:55.412097   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 15:26:55.423092   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 15:26:55.423146   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 15:26:55.432321   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 15:26:55.441379   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 15:26:55.441447   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 15:26:55.450733   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.459263   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 15:26:55.459333   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 15:26:55.468488   65605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 15:26:55.477223   65605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 15:26:55.477277   65605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
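The stale-config cleanup above follows a simple per-file pattern: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not reference it is removed before kubeadm init is retried. A minimal sketch of the same check, not minikube's actual implementation, with the endpoint and file list taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -qs "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # stale or missing: drop it so kubeadm can regenerate it
	  fi
	done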
	I0723 15:26:55.485924   65605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0723 15:26:55.555024   65605 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0723 15:26:55.555097   65605 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 15:26:55.695658   65605 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 15:26:55.695814   65605 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 15:26:55.695939   65605 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 15:26:55.867103   65605 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 15:26:55.870203   65605 out.go:204]   - Generating certificates and keys ...
	I0723 15:26:55.870299   65605 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 15:26:55.870407   65605 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 15:26:55.870490   65605 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0723 15:26:55.870568   65605 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0723 15:26:55.870655   65605 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0723 15:26:55.870733   65605 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0723 15:26:55.870813   65605 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0723 15:26:55.870861   65605 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0723 15:26:55.870920   65605 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0723 15:26:55.870985   65605 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0723 15:26:55.871016   65605 kubeadm.go:310] [certs] Using the existing "sa" key
	I0723 15:26:55.871063   65605 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 15:26:55.963452   65605 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 15:26:56.554450   65605 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 15:26:57.109698   65605 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 15:26:57.223533   65605 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 15:26:57.243368   65605 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 15:26:57.244331   65605 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 15:26:57.244378   65605 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 15:26:57.375340   65605 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 15:26:57.377119   65605 out.go:204]   - Booting up control plane ...
	I0723 15:26:57.377234   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 15:26:57.386697   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 15:26:57.388552   65605 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 15:26:57.389505   65605 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 15:26:57.391792   65605 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0723 15:27:37.394425   65605 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0723 15:27:37.394534   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:37.394766   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:42.395393   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:42.395663   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:27:52.395847   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:27:52.396071   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:12.396192   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:12.396413   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395047   65605 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0723 15:28:52.395369   65605 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0723 15:28:52.395384   65605 kubeadm.go:310] 
	I0723 15:28:52.395457   65605 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0723 15:28:52.395531   65605 kubeadm.go:310] 		timed out waiting for the condition
	I0723 15:28:52.395542   65605 kubeadm.go:310] 
	I0723 15:28:52.395588   65605 kubeadm.go:310] 	This error is likely caused by:
	I0723 15:28:52.395619   65605 kubeadm.go:310] 		- The kubelet is not running
	I0723 15:28:52.395780   65605 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0723 15:28:52.395809   65605 kubeadm.go:310] 
	I0723 15:28:52.395964   65605 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0723 15:28:52.396028   65605 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0723 15:28:52.396084   65605 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0723 15:28:52.396095   65605 kubeadm.go:310] 
	I0723 15:28:52.396194   65605 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0723 15:28:52.396276   65605 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0723 15:28:52.396286   65605 kubeadm.go:310] 
	I0723 15:28:52.396449   65605 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0723 15:28:52.396552   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0723 15:28:52.396649   65605 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0723 15:28:52.396744   65605 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0723 15:28:52.396752   65605 kubeadm.go:310] 
	I0723 15:28:52.397220   65605 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 15:28:52.397322   65605 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0723 15:28:52.397397   65605 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0723 15:28:52.397473   65605 kubeadm.go:394] duration metric: took 8m2.354906945s to StartCluster
	I0723 15:28:52.397516   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 15:28:52.397573   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 15:28:52.442298   65605 cri.go:89] found id: ""
	I0723 15:28:52.442328   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.442339   65605 logs.go:278] No container was found matching "kube-apiserver"
	I0723 15:28:52.442347   65605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 15:28:52.442422   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 15:28:52.476108   65605 cri.go:89] found id: ""
	I0723 15:28:52.476131   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.476138   65605 logs.go:278] No container was found matching "etcd"
	I0723 15:28:52.476144   65605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 15:28:52.476205   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 15:28:52.511118   65605 cri.go:89] found id: ""
	I0723 15:28:52.511143   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.511152   65605 logs.go:278] No container was found matching "coredns"
	I0723 15:28:52.511159   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 15:28:52.511224   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 15:28:52.544901   65605 cri.go:89] found id: ""
	I0723 15:28:52.544934   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.544946   65605 logs.go:278] No container was found matching "kube-scheduler"
	I0723 15:28:52.544954   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 15:28:52.545020   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 15:28:52.580472   65605 cri.go:89] found id: ""
	I0723 15:28:52.580494   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.580501   65605 logs.go:278] No container was found matching "kube-proxy"
	I0723 15:28:52.580515   65605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 15:28:52.580577   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 15:28:52.613777   65605 cri.go:89] found id: ""
	I0723 15:28:52.613808   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.613818   65605 logs.go:278] No container was found matching "kube-controller-manager"
	I0723 15:28:52.613826   65605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 15:28:52.613894   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 15:28:52.650831   65605 cri.go:89] found id: ""
	I0723 15:28:52.650961   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.650974   65605 logs.go:278] No container was found matching "kindnet"
	I0723 15:28:52.650982   65605 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0723 15:28:52.651048   65605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0723 15:28:52.684805   65605 cri.go:89] found id: ""
	I0723 15:28:52.684833   65605 logs.go:276] 0 containers: []
	W0723 15:28:52.684845   65605 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0723 15:28:52.684857   65605 logs.go:123] Gathering logs for CRI-O ...
	I0723 15:28:52.684873   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 15:28:52.787532   65605 logs.go:123] Gathering logs for container status ...
	I0723 15:28:52.787583   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 15:28:52.843947   65605 logs.go:123] Gathering logs for kubelet ...
	I0723 15:28:52.843979   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0723 15:28:52.894679   65605 logs.go:123] Gathering logs for dmesg ...
	I0723 15:28:52.894714   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 15:28:52.910794   65605 logs.go:123] Gathering logs for describe nodes ...
	I0723 15:28:52.910821   65605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0723 15:28:52.989285   65605 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0723 15:28:52.989325   65605 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0723 15:28:52.989368   65605 out.go:239] * 
	W0723 15:28:52.989432   65605 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.989465   65605 out.go:239] * 
	W0723 15:28:52.990350   65605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0723 15:28:52.993770   65605 out.go:177] 
	W0723 15:28:52.995023   65605 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0723 15:28:52.995076   65605 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0723 15:28:52.995095   65605 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0723 15:28:52.996528   65605 out.go:177] 
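The suggestion above refers to minikube's --extra-config flag, which forwards component settings (here the kubelet cgroup driver) into the generated kubeadm/kubelet configuration. A hedged example of retrying this profile with that setting; the profile name comes from the log, and the driver/runtime flags are assumed to mirror the KVM/crio environment of this test run:

	minikube start -p old-k8s-version-000272 \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd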
	
	
	==> CRI-O <==
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.850153855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749181850117104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5711ff9-0613-4207-ad42-d6030161d726 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.850811298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7425dabf-ca01-46a7-ad37-b714f4b073b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.850884706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7425dabf-ca01-46a7-ad37-b714f4b073b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.850953018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7425dabf-ca01-46a7-ad37-b714f4b073b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.880005288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27ab0819-3aed-4dd7-95d9-d99ce5d96757 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.880105559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27ab0819-3aed-4dd7-95d9-d99ce5d96757 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.881404165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9beb60f-5bdc-4026-8873-d7cf53932c32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.881827827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749181881802219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9beb60f-5bdc-4026-8873-d7cf53932c32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.882238932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caf2cd11-908d-4aad-8848-3106913a5fb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.882305017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caf2cd11-908d-4aad-8848-3106913a5fb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.882345478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=caf2cd11-908d-4aad-8848-3106913a5fb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.915935105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dce18489-cbc3-44db-b5f7-a13a88b4ffdb name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.916046429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dce18489-cbc3-44db-b5f7-a13a88b4ffdb name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.917288614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c62970f-a03d-4cda-8a8e-2c4ad055ccf5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.917729504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749181917704931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c62970f-a03d-4cda-8a8e-2c4ad055ccf5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.918154878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45e4ee28-d463-4dc8-bcd6-2a26a119e02b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.918225157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45e4ee28-d463-4dc8-bcd6-2a26a119e02b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.918258293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=45e4ee28-d463-4dc8-bcd6-2a26a119e02b name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.950019937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98f0db29-6ea8-41cd-af89-952f6a216e12 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.950141763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98f0db29-6ea8-41cd-af89-952f6a216e12 name=/runtime.v1.RuntimeService/Version
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.951750317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3a451ef-dd80-4643-878e-e0d40b273fcf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.952347074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721749181952311101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3a451ef-dd80-4643-878e-e0d40b273fcf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.953109758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5e0113c-1a82-4eaa-bed7-c7f3701fb6c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.953201279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5e0113c-1a82-4eaa-bed7-c7f3701fb6c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 23 15:39:41 old-k8s-version-000272 crio[653]: time="2024-07-23 15:39:41.953236415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5e0113c-1a82-4eaa-bed7-c7f3701fb6c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul23 15:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051105] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.906859] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.937543] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.495630] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.117641] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061578] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.222393] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.111093] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.239582] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.000298] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.060522] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958927] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Jul23 15:21] kauditd_printk_skb: 46 callbacks suppressed
	[Jul23 15:24] systemd-fstab-generator[5081]: Ignoring "noauto" option for root device
	[Jul23 15:26] systemd-fstab-generator[5360]: Ignoring "noauto" option for root device
	[  +0.066445] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:39:42 up 19 min,  0 users,  load average: 0.00, 0.00, 0.01
	Linux old-k8s-version-000272 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c8ec40, 0xc000c91660)
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: goroutine 155 [chan receive]:
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000c75320)
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: goroutine 156 [select]:
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d4def0, 0x4f0ac20, 0xc000c83130, 0x1, 0xc00009e0c0)
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024e540, 0xc00009e0c0)
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c8ec80, 0xc000c91720)
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 23 15:39:42 old-k8s-version-000272 kubelet[6800]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 23 15:39:42 old-k8s-version-000272 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 23 15:39:42 old-k8s-version-000272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 2 (227.418038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-000272" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (103.61s)

                                                
                                    

Test pass (259/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.45
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 17.87
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.42
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 11.82
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.55
31 TestOffline 122.77
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 143.47
38 TestAddons/parallel/Registry 18.47
40 TestAddons/parallel/InspektorGadget 11.11
42 TestAddons/parallel/HelmTiller 13.14
44 TestAddons/parallel/CSI 105.22
45 TestAddons/parallel/Headlamp 14.02
46 TestAddons/parallel/CloudSpanner 5.57
47 TestAddons/parallel/LocalPath 55.11
48 TestAddons/parallel/NvidiaDevicePlugin 5.65
49 TestAddons/parallel/Yakd 6
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 62.21
56 TestCertExpiration 267.34
58 TestForceSystemdFlag 57.87
59 TestForceSystemdEnv 41.93
61 TestKVMDriverInstallOrUpdate 3.8
65 TestErrorSpam/setup 38.93
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.45
69 TestErrorSpam/unpause 1.49
70 TestErrorSpam/stop 4.74
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 56.71
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 31.8
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
82 TestFunctional/serial/CacheCmd/cache/add_local 2.05
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 28.94
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.39
93 TestFunctional/serial/LogsFileCmd 1.38
94 TestFunctional/serial/InvalidService 3.92
96 TestFunctional/parallel/ConfigCmd 0.28
97 TestFunctional/parallel/DashboardCmd 12.49
98 TestFunctional/parallel/DryRun 0.31
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.85
104 TestFunctional/parallel/ServiceCmdConnect 19.46
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 44.35
108 TestFunctional/parallel/SSHCmd 0.39
109 TestFunctional/parallel/CpCmd 1.21
110 TestFunctional/parallel/MySQL 21.73
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.25
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
120 TestFunctional/parallel/License 0.55
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
128 TestFunctional/parallel/ImageCommands/ImageBuild 3.49
129 TestFunctional/parallel/ImageCommands/Setup 1.79
130 TestFunctional/parallel/Version/short 0.04
131 TestFunctional/parallel/Version/components 0.52
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
142 TestFunctional/parallel/MountCmd/any-port 19.41
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.11
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.09
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.15
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
149 TestFunctional/parallel/MountCmd/specific-port 1.75
150 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
151 TestFunctional/parallel/MountCmd/VerifyCleanup 0.82
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
153 TestFunctional/parallel/ProfileCmd/profile_list 0.32
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
155 TestFunctional/parallel/ServiceCmd/List 1.29
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
158 TestFunctional/parallel/ServiceCmd/Format 0.29
159 TestFunctional/parallel/ServiceCmd/URL 0.3
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 227.39
167 TestMultiControlPlane/serial/DeployApp 6.47
168 TestMultiControlPlane/serial/PingHostFromPods 1.17
169 TestMultiControlPlane/serial/AddWorkerNode 59.67
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.53
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.18
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 314.31
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 77.41
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
188 TestJSONOutput/start/Command 96.11
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.67
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.33
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 87.23
220 TestMountStart/serial/StartWithMountFirst 25.41
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 24.32
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 23.64
228 TestMountStart/serial/VerifyMountPostStop 0.35
231 TestMultiNode/serial/FreshStart2Nodes 119.65
232 TestMultiNode/serial/DeployApp2Nodes 5.24
233 TestMultiNode/serial/PingHostFrom2Pods 0.77
234 TestMultiNode/serial/AddNode 47.43
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.95
238 TestMultiNode/serial/StopNode 2.26
239 TestMultiNode/serial/StartAfterStop 39.76
241 TestMultiNode/serial/DeleteNode 2.4
243 TestMultiNode/serial/RestartMultiNode 179.98
244 TestMultiNode/serial/ValidateNameConflict 39.94
251 TestScheduledStopUnix 113.17
255 TestRunningBinaryUpgrade 218.92
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 94.52
270 TestPause/serial/Start 151.91
271 TestNoKubernetes/serial/StartWithStopK8s 39.42
272 TestNoKubernetes/serial/Start 51.04
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
274 TestNoKubernetes/serial/ProfileList 29.36
275 TestPause/serial/SecondStartNoReconfiguration 40.65
276 TestNoKubernetes/serial/Stop 1.3
277 TestNoKubernetes/serial/StartNoArgs 21.02
278 TestPause/serial/Pause 0.74
279 TestPause/serial/VerifyStatus 0.23
280 TestPause/serial/Unpause 0.66
281 TestPause/serial/PauseAgain 0.85
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestPause/serial/DeletePaused 1.02
284 TestPause/serial/VerifyDeletedResources 34.97
292 TestNetworkPlugins/group/false 3.15
296 TestStoppedBinaryUpgrade/Setup 2.34
297 TestStoppedBinaryUpgrade/Upgrade 113.59
301 TestStartStop/group/no-preload/serial/FirstStart 87.11
302 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
304 TestStartStop/group/embed-certs/serial/FirstStart 105.78
305 TestStartStop/group/no-preload/serial/DeployApp 10.27
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
308 TestStartStop/group/embed-certs/serial/DeployApp 10.27
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.1
316 TestStartStop/group/no-preload/serial/SecondStart 653.22
318 TestStartStop/group/embed-certs/serial/SecondStart 593.14
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
322 TestStartStop/group/old-k8s-version/serial/Stop 3.54
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 419.39
336 TestStartStop/group/newest-cni/serial/FirstStart 44.53
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
339 TestStartStop/group/newest-cni/serial/Stop 2.34
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
341 TestStartStop/group/newest-cni/serial/SecondStart 37.85
342 TestNetworkPlugins/group/auto/Start 93.44
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
346 TestStartStop/group/newest-cni/serial/Pause 2.63
347 TestNetworkPlugins/group/kindnet/Start 93.74
348 TestNetworkPlugins/group/calico/Start 123.98
349 TestNetworkPlugins/group/auto/KubeletFlags 0.2
350 TestNetworkPlugins/group/auto/NetCatPod 11.22
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/auto/DNS 0.16
353 TestNetworkPlugins/group/auto/Localhost 0.17
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
356 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
357 TestNetworkPlugins/group/kindnet/DNS 0.16
358 TestNetworkPlugins/group/kindnet/Localhost 0.17
359 TestNetworkPlugins/group/kindnet/HairPin 0.18
360 TestNetworkPlugins/group/custom-flannel/Start 86.64
361 TestNetworkPlugins/group/enable-default-cni/Start 116.35
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.23
364 TestNetworkPlugins/group/calico/NetCatPod 11.2
365 TestNetworkPlugins/group/calico/DNS 0.24
366 TestNetworkPlugins/group/calico/Localhost 0.23
367 TestNetworkPlugins/group/calico/HairPin 0.13
368 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
370 TestNetworkPlugins/group/flannel/Start 95.31
371 TestNetworkPlugins/group/bridge/Start 89.67
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.35
374 TestNetworkPlugins/group/custom-flannel/DNS 0.16
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
384 TestNetworkPlugins/group/bridge/NetCatPod 10.22
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
386 TestNetworkPlugins/group/flannel/NetCatPod 11.25
387 TestNetworkPlugins/group/bridge/DNS 0.15
388 TestNetworkPlugins/group/bridge/Localhost 0.13
389 TestNetworkPlugins/group/bridge/HairPin 0.15
390 TestNetworkPlugins/group/flannel/DNS 0.15
391 TestNetworkPlugins/group/flannel/Localhost 0.12
392 TestNetworkPlugins/group/flannel/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (23.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-344682 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-344682 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.447220115s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.45s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-344682
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-344682: exit status 85 (55.224703ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC |          |
	|         | -p download-only-344682        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 13:56:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 13:56:30.868181   18515 out.go:291] Setting OutFile to fd 1 ...
	I0723 13:56:30.868453   18515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:56:30.868463   18515 out.go:304] Setting ErrFile to fd 2...
	I0723 13:56:30.868467   18515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:56:30.868634   18515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	W0723 13:56:30.868749   18515 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19319-11303/.minikube/config/config.json: open /home/jenkins/minikube-integration/19319-11303/.minikube/config/config.json: no such file or directory
	I0723 13:56:30.869380   18515 out.go:298] Setting JSON to true
	I0723 13:56:30.870280   18515 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2337,"bootTime":1721740654,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 13:56:30.870343   18515 start.go:139] virtualization: kvm guest
	I0723 13:56:30.872838   18515 out.go:97] [download-only-344682] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0723 13:56:30.872974   18515 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball: no such file or directory
	I0723 13:56:30.873053   18515 notify.go:220] Checking for updates...
	I0723 13:56:30.874537   18515 out.go:169] MINIKUBE_LOCATION=19319
	I0723 13:56:30.876191   18515 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 13:56:30.877637   18515 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:56:30.879064   18515 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:56:30.880494   18515 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0723 13:56:30.883053   18515 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 13:56:30.883233   18515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 13:56:30.984259   18515 out.go:97] Using the kvm2 driver based on user configuration
	I0723 13:56:30.984310   18515 start.go:297] selected driver: kvm2
	I0723 13:56:30.984316   18515 start.go:901] validating driver "kvm2" against <nil>
	I0723 13:56:30.984709   18515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:56:30.984855   18515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 13:56:30.999682   18515 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 13:56:30.999739   18515 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 13:56:31.000259   18515 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0723 13:56:31.000451   18515 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 13:56:31.000477   18515 cni.go:84] Creating CNI manager for ""
	I0723 13:56:31.000487   18515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:56:31.000498   18515 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 13:56:31.000563   18515 start.go:340] cluster config:
	{Name:download-only-344682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-344682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:56:31.000808   18515 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:56:31.003069   18515 out.go:97] Downloading VM boot image ...
	I0723 13:56:31.003108   18515 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0723 13:56:40.852085   18515 out.go:97] Starting "download-only-344682" primary control-plane node in "download-only-344682" cluster
	I0723 13:56:40.852119   18515 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 13:56:40.947341   18515 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0723 13:56:40.947379   18515 cache.go:56] Caching tarball of preloaded images
	I0723 13:56:40.947553   18515 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 13:56:40.949583   18515 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0723 13:56:40.949599   18515 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0723 13:56:41.055359   18515 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-344682 host does not exist
	  To start a cluster, run: "minikube start -p download-only-344682"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-344682
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (17.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-055184 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-055184 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.869312042s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (17.87s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-055184
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-055184: exit status 85 (416.205489ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC |                     |
	|         | -p download-only-344682        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC | 23 Jul 24 13:56 UTC |
	| delete  | -p download-only-344682        | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC | 23 Jul 24 13:56 UTC |
	| start   | -o=json --download-only        | download-only-055184 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC |                     |
	|         | -p download-only-055184        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 13:56:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 13:56:54.623867   18771 out.go:291] Setting OutFile to fd 1 ...
	I0723 13:56:54.623984   18771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:56:54.623993   18771 out.go:304] Setting ErrFile to fd 2...
	I0723 13:56:54.623997   18771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:56:54.624155   18771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 13:56:54.624673   18771 out.go:298] Setting JSON to true
	I0723 13:56:54.625450   18771 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2361,"bootTime":1721740654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 13:56:54.625504   18771 start.go:139] virtualization: kvm guest
	I0723 13:56:54.627674   18771 out.go:97] [download-only-055184] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 13:56:54.627827   18771 notify.go:220] Checking for updates...
	I0723 13:56:54.629316   18771 out.go:169] MINIKUBE_LOCATION=19319
	I0723 13:56:54.630542   18771 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 13:56:54.631832   18771 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:56:54.632904   18771 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:56:54.633903   18771 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0723 13:56:54.636267   18771 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 13:56:54.636446   18771 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 13:56:54.668543   18771 out.go:97] Using the kvm2 driver based on user configuration
	I0723 13:56:54.668569   18771 start.go:297] selected driver: kvm2
	I0723 13:56:54.668574   18771 start.go:901] validating driver "kvm2" against <nil>
	I0723 13:56:54.668917   18771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:56:54.668982   18771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 13:56:54.684475   18771 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 13:56:54.684531   18771 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 13:56:54.685124   18771 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0723 13:56:54.685333   18771 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 13:56:54.685364   18771 cni.go:84] Creating CNI manager for ""
	I0723 13:56:54.685378   18771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:56:54.685396   18771 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 13:56:54.685460   18771 start.go:340] cluster config:
	{Name:download-only-055184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-055184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:56:54.685577   18771 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:56:54.687409   18771 out.go:97] Starting "download-only-055184" primary control-plane node in "download-only-055184" cluster
	I0723 13:56:54.687433   18771 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:56:55.205235   18771 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0723 13:56:55.205274   18771 cache.go:56] Caching tarball of preloaded images
	I0723 13:56:55.205466   18771 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 13:56:55.207325   18771 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0723 13:56:55.207352   18771 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0723 13:56:55.311564   18771 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-055184 host does not exist
	  To start a cluster, run: "minikube start -p download-only-055184"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.42s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-055184
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (11.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-788360 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-788360 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.817679055s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (11.82s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-788360
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-788360: exit status 85 (54.696335ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC |                     |
	|         | -p download-only-344682             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC | 23 Jul 24 13:56 UTC |
	| delete  | -p download-only-344682             | download-only-344682 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC | 23 Jul 24 13:56 UTC |
	| start   | -o=json --download-only             | download-only-055184 | jenkins | v1.33.1 | 23 Jul 24 13:56 UTC |                     |
	|         | -p download-only-055184             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| delete  | -p download-only-055184             | download-only-055184 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC | 23 Jul 24 13:57 UTC |
	| start   | -o=json --download-only             | download-only-788360 | jenkins | v1.33.1 | 23 Jul 24 13:57 UTC |                     |
	|         | -p download-only-788360             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 13:57:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 13:57:13.159209   18993 out.go:291] Setting OutFile to fd 1 ...
	I0723 13:57:13.159436   18993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:13.159444   18993 out.go:304] Setting ErrFile to fd 2...
	I0723 13:57:13.159448   18993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 13:57:13.159606   18993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 13:57:13.160147   18993 out.go:298] Setting JSON to true
	I0723 13:57:13.160926   18993 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2379,"bootTime":1721740654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 13:57:13.160981   18993 start.go:139] virtualization: kvm guest
	I0723 13:57:13.162840   18993 out.go:97] [download-only-788360] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 13:57:13.162990   18993 notify.go:220] Checking for updates...
	I0723 13:57:13.164140   18993 out.go:169] MINIKUBE_LOCATION=19319
	I0723 13:57:13.165247   18993 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 13:57:13.166265   18993 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 13:57:13.167285   18993 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 13:57:13.168357   18993 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0723 13:57:13.170460   18993 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 13:57:13.170690   18993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 13:57:13.202269   18993 out.go:97] Using the kvm2 driver based on user configuration
	I0723 13:57:13.202308   18993 start.go:297] selected driver: kvm2
	I0723 13:57:13.202314   18993 start.go:901] validating driver "kvm2" against <nil>
	I0723 13:57:13.202708   18993 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:13.202783   18993 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19319-11303/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0723 13:57:13.217163   18993 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0723 13:57:13.217214   18993 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 13:57:13.217673   18993 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0723 13:57:13.217820   18993 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 13:57:13.217872   18993 cni.go:84] Creating CNI manager for ""
	I0723 13:57:13.217884   18993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0723 13:57:13.217891   18993 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0723 13:57:13.217947   18993 start.go:340] cluster config:
	{Name:download-only-788360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-788360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 13:57:13.218038   18993 iso.go:125] acquiring lock: {Name:mk4b004df17d8bd7e7f5be3e4c1c583053b331d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 13:57:13.219642   18993 out.go:97] Starting "download-only-788360" primary control-plane node in "download-only-788360" cluster
	I0723 13:57:13.219662   18993 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 13:57:13.743499   18993 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0723 13:57:13.743536   18993 cache.go:56] Caching tarball of preloaded images
	I0723 13:57:13.743674   18993 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 13:57:13.745359   18993 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0723 13:57:13.745381   18993 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0723 13:57:13.843231   18993 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19319-11303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-788360 host does not exist
	  To start a cluster, run: "minikube start -p download-only-788360"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-788360
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-132421 --alsologtostderr --binary-mirror http://127.0.0.1:32931 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-132421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-132421
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
x
+
TestOffline (122.77s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-576859 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-576859 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m1.71223253s)
helpers_test.go:175: Cleaning up "offline-crio-576859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-576859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-576859: (1.058377592s)
--- PASS: TestOffline (122.77s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-566823
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-566823: exit status 85 (52.415676ms)

                                                
                                                
-- stdout --
	* Profile "addons-566823" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-566823"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-566823
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-566823: exit status 85 (53.57926ms)

                                                
                                                
-- stdout --
	* Profile "addons-566823" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-566823"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (143.47s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-566823 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-566823 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.474400514s)
--- PASS: TestAddons/Setup (143.47s)
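
Note: the start command exercised here can be re-issued by hand. A minimal sketch, assuming a working kvm2 driver and reusing the profile name and a subset of the addon flags recorded in the log above (for illustration only, not the harness itself):

minikube start -p addons-566823 --memory=4000 --driver=kvm2 --container-runtime=crio \
  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
  --addons=csi-hostpath-driver --addons=volumesnapshots --addons=storage-provisioner-rancher
# confirm which addons ended up enabled for the profile
minikube addons list -p addons-566823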

                                                
                                    
x
+
TestAddons/parallel/Registry (18.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 21.590058ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-4gvbc" [191b0c30-0add-4831-9cb0-de8b776cedc3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007393898s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4b47m" [02461034-b1da-43d3-8017-4b96ba1b9c2d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004986186s
addons_test.go:342: (dbg) Run:  kubectl --context addons-566823 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-566823 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-566823 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.586838374s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 ip
2024/07/23 14:00:07 [DEBUG] GET http://192.168.39.114:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.47s)
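
Note: the in-cluster reachability check above can be repeated manually. A short sketch, assuming the registry addon is enabled and kubectl points at the same profile (image, service name, and the node-side port 5000 are taken from the log):

# resolve the registry Service from inside the cluster
kubectl --context addons-566823 run --rm -it registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# the registry proxy is also reachable on the node IP at port 5000
curl -sI "http://$(minikube -p addons-566823 ip):5000"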

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bsfbc" [458f5e31-4e85-40db-854a-033608376aa7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011192267s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-566823
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-566823: (6.097449163s)
--- PASS: TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.14s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 17.477802ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-598dj" [98da9631-ad0b-4406-b5c6-c709e679ab9d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004421327s
addons_test.go:475: (dbg) Run:  kubectl --context addons-566823 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-566823 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.527133465s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.14s)

                                                
                                    
x
+
TestAddons/parallel/CSI (105.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 23.110451ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-566823 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-566823 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [365c4fd0-adaa-477d-b8e7-477bfcc1a7c0] Pending
helpers_test.go:344: "task-pv-pod" [365c4fd0-adaa-477d-b8e7-477bfcc1a7c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [365c4fd0-adaa-477d-b8e7-477bfcc1a7c0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003756097s
addons_test.go:586: (dbg) Run:  kubectl --context addons-566823 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-566823 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-566823 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-566823 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-566823 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-566823 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-566823 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77143da7-595d-4dc8-92ab-1712b2322583] Pending
helpers_test.go:344: "task-pv-pod-restore" [77143da7-595d-4dc8-92ab-1712b2322583] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77143da7-595d-4dc8-92ab-1712b2322583] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005002889s
addons_test.go:628: (dbg) Run:  kubectl --context addons-566823 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-566823 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-566823 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.719160676s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (105.22s)
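
Note: the PVC/snapshot/restore cycle above is driven by manifests under testdata/csi-hostpath-driver. A rough hand-written equivalent of the first step is sketched below; the storage class name csi-hostpath-sc, access mode, and size are assumptions about the addon's defaults, only the claim name hpvc comes from the log:

cat <<'EOF' | kubectl --context addons-566823 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# poll the claim the same way the test helper does
kubectl --context addons-566823 get pvc hpvc -o jsonpath={.status.phase}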

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-566823 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-566823 --alsologtostderr -v=1: (1.012454165s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-f4tf7" [1198ab14-ccfe-4434-9074-5b62d0a63857] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-f4tf7" [1198ab14-ccfe-4434-9074-5b62d0a63857] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004299842s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-sqrrs" [37ce1462-4149-44c5-aef8-06804306663b] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004146515s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-566823
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-566823 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-566823 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1cdab264-bb10-4d7d-9838-c0215117b3c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1cdab264-bb10-4d7d-9838-c0215117b3c2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1cdab264-bb10-4d7d-9838-c0215117b3c2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004611494s
addons_test.go:992: (dbg) Run:  kubectl --context addons-566823 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 ssh "cat /opt/local-path-provisioner/pvc-c8cbfc9c-f3f6-4373-91f9-dcf10e6a4265_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-566823 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-566823 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-566823 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-566823 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.308897939s)
--- PASS: TestAddons/parallel/LocalPath (55.11s)
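
Note: a hand-rolled version of the same claim, assuming the storage-provisioner-rancher addon exposes the usual local-path storage class (class name and size are assumptions; the claim name test-pvc comes from the log). The claim typically stays Pending until a consuming pod is scheduled, which is why the test also applies pod.yaml:

cat <<'EOF' | kubectl --context addons-566823 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
EOF
kubectl --context addons-566823 get pvc test-pvc -o jsonpath={.status.phase}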

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ntcgv" [fa2530a9-7fcd-4a19-bde9-4a8e1607e1e9] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005033845s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-566823
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-k4b7n" [51963bc7-84ef-4889-b876-8ef334e75508] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003732228s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-566823 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-566823 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (62.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-534062 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-534062 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.994289947s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-534062 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-534062 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-534062 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-534062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-534062
--- PASS: TestCertOptions (62.21s)
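
Note: the certificate inspection step can be narrowed to the fields this test cares about. A sketch reusing the exact certificate path from the log (profile name kept for illustration only):

minikube -p cert-options-534062 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# the custom API server port should also show up in the generated kubeconfig
kubectl --context cert-options-534062 config view --minify -o jsonpath='{.clusters[0].cluster.server}'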

                                                
                                    
x
+
TestCertExpiration (267.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-457920 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-457920 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (51.432437255s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-457920 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-457920 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (35.124833176s)
helpers_test.go:175: Cleaning up "cert-expiration-457920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-457920
--- PASS: TestCertExpiration (267.34s)
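
Note: the two start invocations above differ only in --cert-expiration. To see the effect directly, one can read the notAfter date off the API server certificate before and after the second start (a sketch, assuming the profile still exists):

minikube -p cert-expiration-457920 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# with --cert-expiration=3m the date is a few minutes out; after the 8760h restart it moves to roughly a year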

                                                
                                    
x
+
TestForceSystemdFlag (57.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-357935 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-357935 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.915832507s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-357935 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-357935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-357935
--- PASS: TestForceSystemdFlag (57.87s)
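
Note: what the test reads back is the cgroup manager CRI-O was configured with. The same check by hand, with the file path taken from the log (the expected key/value is CRI-O's standard cgroup_manager setting, an assumption about what the grep should show):

minikube -p force-systemd-flag-357935 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# with --force-systemd the expected line is: cgroup_manager = "systemd"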

                                                
                                    
x
+
TestForceSystemdEnv (41.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-661442 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-661442 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.910889876s)
helpers_test.go:175: Cleaning up "force-systemd-env-661442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-661442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-661442: (1.017643441s)
--- PASS: TestForceSystemdEnv (41.93s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.8s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.80s)

                                                
                                    
x
+
TestErrorSpam/setup (38.93s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-179594 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-179594 --driver=kvm2  --container-runtime=crio
E0723 14:09:49.700259   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:09:49.706082   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:09:49.716394   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:09:49.736733   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:09:49.777078   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-179594 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-179594 --driver=kvm2  --container-runtime=crio: (38.926485299s)
--- PASS: TestErrorSpam/setup (38.93s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 start --dry-run
E0723 14:09:49.857548   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 start --dry-run
E0723 14:09:50.017652   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 status
E0723 14:09:50.337881   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 pause
E0723 14:09:50.978922   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 pause
E0723 14:09:52.259730   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

                                                
                                    
x
+
TestErrorSpam/stop (4.74s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop
E0723 14:09:54.820125   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop: (1.509888675s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop: (2.058530134s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-179594 --log_dir /tmp/nospam-179594 stop: (1.172301088s)
--- PASS: TestErrorSpam/stop (4.74s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19319-11303/.minikube/files/etc/test/nested/copy/18503/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.71s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0723 14:09:59.940991   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:10:10.181855   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:10:30.663009   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-066448 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.706653877s)
--- PASS: TestFunctional/serial/StartWithProxy (56.71s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31.8s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --alsologtostderr -v=8
E0723 14:11:11.623294   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-066448 --alsologtostderr -v=8: (31.796532719s)
functional_test.go:659: soft start took 31.797164722s for "functional-066448" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.80s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-066448 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:3.1: (1.212071497s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:3.3: (1.270820558s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 cache add registry.k8s.io/pause:latest: (1.11737474s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-066448 /tmp/TestFunctionalserialCacheCmdcacheadd_local662899671/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache add minikube-local-cache-test:functional-066448
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 cache add minikube-local-cache-test:functional-066448: (1.711212918s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache delete minikube-local-cache-test:functional-066448
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-066448
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.083984ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
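
The passing sequence above doubles as a manual recipe for the cache-reload behaviour. A sketch only; functional-066448 is simply the profile created by this run, and any existing profile works the same way:

    # drop the cached image inside the node, confirm it is gone, then restore it from the host-side cache
    out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-linux-amd64 -p functional-066448 cache reload
    out/minikube-linux-amd64 -p functional-066448 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cache is reloaded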

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 kubectl -- --context functional-066448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-066448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (28.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-066448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.941205586s)
functional_test.go:757: restart took 28.941320777s for "functional-066448" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (28.94s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-066448 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 logs: (1.39298857s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 logs --file /tmp/TestFunctionalserialLogsFileCmd3752562079/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 logs --file /tmp/TestFunctionalserialLogsFileCmd3752562079/001/logs.txt: (1.377191831s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (3.92s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-066448 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-066448
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-066448: exit status 115 (270.137965ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.182:30727 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-066448 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.92s)
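
For reference, the failure mode exercised above can be reproduced with the same three commands; testdata/invalidsvc.yaml is the fixture referenced in the log, and the exit status and reason come from the captured output:

    kubectl --context functional-066448 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-066448     # exits 115 with SVC_UNREACHABLE: no running pod backs the service
    kubectl --context functional-066448 delete -f testdata/invalidsvc.yaml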

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 config get cpus: exit status 14 (43.647084ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 config get cpus: exit status 14 (42.031538ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)
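
The exit codes above follow a simple pattern: config get on a key that has not been set returns status 14 ("specified key could not be found in config"), and the same command succeeds while the key is set. A minimal sketch against the same profile:

    out/minikube-linux-amd64 -p functional-066448 config get cpus      # exit status 14: key not in config
    out/minikube-linux-amd64 -p functional-066448 config set cpus 2
    out/minikube-linux-amd64 -p functional-066448 config get cpus      # succeeds while the key is set
    out/minikube-linux-amd64 -p functional-066448 config unset cpus
    out/minikube-linux-amd64 -p functional-066448 config get cpus      # exit status 14 again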

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-066448 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-066448 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28815: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.49s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-066448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.433317ms)

                                                
                                                
-- stdout --
	* [functional-066448] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:12:36.378440   28705 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:12:36.378729   28705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:36.378740   28705 out.go:304] Setting ErrFile to fd 2...
	I0723 14:12:36.378746   28705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:36.378971   28705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:12:36.379568   28705 out.go:298] Setting JSON to false
	I0723 14:12:36.380675   28705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3302,"bootTime":1721740654,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:12:36.380751   28705 start.go:139] virtualization: kvm guest
	I0723 14:12:36.382810   28705 out.go:177] * [functional-066448] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 14:12:36.384371   28705 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:12:36.384392   28705 notify.go:220] Checking for updates...
	I0723 14:12:36.386961   28705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:12:36.388336   28705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:12:36.389701   28705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:36.391012   28705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:12:36.392290   28705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:12:36.394042   28705 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:12:36.394645   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:36.394703   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:36.410134   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39183
	I0723 14:12:36.410638   28705 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:36.411260   28705 main.go:141] libmachine: Using API Version  1
	I0723 14:12:36.411276   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:36.411602   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:36.411871   28705 main.go:141] libmachine: (functional-066448) Calling .DriverName
	I0723 14:12:36.412129   28705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:12:36.412491   28705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:36.412531   28705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:36.427005   28705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0723 14:12:36.427371   28705 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:36.427828   28705 main.go:141] libmachine: Using API Version  1
	I0723 14:12:36.427857   28705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:36.428173   28705 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:36.428367   28705 main.go:141] libmachine: (functional-066448) Calling .DriverName
	I0723 14:12:36.467685   28705 out.go:177] * Using the kvm2 driver based on existing profile
	I0723 14:12:36.469051   28705 start.go:297] selected driver: kvm2
	I0723 14:12:36.469063   28705 start.go:901] validating driver "kvm2" against &{Name:functional-066448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-066448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:12:36.469162   28705 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:12:36.471126   28705 out.go:177] 
	W0723 14:12:36.472539   28705 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0723 14:12:36.473697   28705 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
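
Both invocations above use --dry-run, which validates the requested settings against the existing profile without starting anything. The 250MB request trips the 1800MB minimum and exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23); the second call, with no memory override, validates cleanly. A sketch of the two cases, copied from the log:

    out/minikube-linux-amd64 start -p functional-066448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23: requested memory below the usable minimum
    out/minikube-linux-amd64 start -p functional-066448 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # passes validation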

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-066448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.887428ms)

                                                
                                                
-- stdout --
	* [functional-066448] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:12:26.850132   27946 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:12:26.850256   27946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:26.850266   27946 out.go:304] Setting ErrFile to fd 2...
	I0723 14:12:26.850270   27946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:12:26.850593   27946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:12:26.851089   27946 out.go:298] Setting JSON to false
	I0723 14:12:26.851922   27946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3293,"bootTime":1721740654,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 14:12:26.851975   27946 start.go:139] virtualization: kvm guest
	I0723 14:12:26.854237   27946 out.go:177] * [functional-066448] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0723 14:12:26.855652   27946 notify.go:220] Checking for updates...
	I0723 14:12:26.855658   27946 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:12:26.857164   27946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:12:26.858638   27946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 14:12:26.860006   27946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 14:12:26.861338   27946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 14:12:26.862809   27946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:12:26.864622   27946 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:12:26.865243   27946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:26.865320   27946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:26.879856   27946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0723 14:12:26.880294   27946 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:26.880814   27946 main.go:141] libmachine: Using API Version  1
	I0723 14:12:26.880837   27946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:26.881170   27946 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:26.881352   27946 main.go:141] libmachine: (functional-066448) Calling .DriverName
	I0723 14:12:26.881616   27946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:12:26.881929   27946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:12:26.881979   27946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:12:26.895973   27946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0723 14:12:26.896421   27946 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:12:26.896918   27946 main.go:141] libmachine: Using API Version  1
	I0723 14:12:26.896939   27946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:12:26.897244   27946 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:12:26.897404   27946 main.go:141] libmachine: (functional-066448) Calling .DriverName
	I0723 14:12:26.929787   27946 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0723 14:12:26.931103   27946 start.go:297] selected driver: kvm2
	I0723 14:12:26.931129   27946 start.go:901] validating driver "kvm2" against &{Name:functional-066448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-066448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:12:26.931298   27946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:12:26.933526   27946 out.go:177] 
	W0723 14:12:26.934941   27946 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0723 14:12:26.936328   27946 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (19.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-066448 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-066448 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-kgtvk" [4a1f5f4c-aa7c-4c77-ab60-2c340cf52a98] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-kgtvk" [4a1f5f4c-aa7c-4c77-ab60-2c340cf52a98] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.007319482s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.182:32371
functional_test.go:1671: http://192.168.39.182:32371: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-kgtvk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.182:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.182:32371
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.46s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9f939a5b-83fe-4006-adfa-fe1691aa8f10] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004355682s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-066448 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-066448 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-066448 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-066448 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-066448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f117a5bb-043d-4818-80b0-ebc5474f9b5d] Pending
helpers_test.go:344: "sp-pod" [f117a5bb-043d-4818-80b0-ebc5474f9b5d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f117a5bb-043d-4818-80b0-ebc5474f9b5d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004268672s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-066448 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-066448 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-066448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [45de2b76-d591-4fa0-bb32-817a9de3a679] Pending
helpers_test.go:344: "sp-pod" [45de2b76-d591-4fa0-bb32-817a9de3a679] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [45de2b76-d591-4fa0-bb32-817a9de3a679] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003617594s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-066448 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.35s)
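
The pod/PVC shuffle above is the persistence check: a file is written into the claim by the first sp-pod, the pod is deleted and recreated, and the intent is that the file is still visible from the new pod. The same sequence, using the fixtures referenced in the log (waiting for sp-pod to be Running between steps, as the test does):

    kubectl --context functional-066448 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-066448 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066448 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-066448 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066448 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066448 exec sp-pod -- ls /tmp/mount     # the file created before recreation should still be listed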

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh -n functional-066448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cp functional-066448:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3123195796/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh -n functional-066448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh -n functional-066448 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)
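
The copy round-trip above maps onto three plain commands. The local destination in the log is a per-test temp directory; /tmp/cp-test.txt below is an arbitrary stand-in and any writable path works:

    out/minikube-linux-amd64 -p functional-066448 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-066448 ssh -n functional-066448 "sudo cat /home/docker/cp-test.txt"     # verify the file landed in the node
    out/minikube-linux-amd64 -p functional-066448 cp functional-066448:/home/docker/cp-test.txt /tmp/cp-test.txt   # copy it back out of the node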

                                                
                                    
TestFunctional/parallel/MySQL (21.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-066448 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-m7fb4" [37cd300f-7d90-427c-90c3-5a2982037748] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-m7fb4" [37cd300f-7d90-427c-90c3-5a2982037748] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003789601s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-066448 exec mysql-64454c8b5c-m7fb4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-066448 exec mysql-64454c8b5c-m7fb4 -- mysql -ppassword -e "show databases;": exit status 1 (203.484573ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-066448 exec mysql-64454c8b5c-m7fb4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.73s)

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18503/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /etc/test/nested/copy/18503/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18503.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /etc/ssl/certs/18503.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18503.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /usr/share/ca-certificates/18503.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/185032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /etc/ssl/certs/185032.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/185032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /usr/share/ca-certificates/185032.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-066448 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active docker": exit status 1 (213.203161ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active containerd": exit status 1 (201.524349ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
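
Both checks above lean on systemctl is-active exiting non-zero (3) for inactive units, which makes the ssh subcommand fail and minikube report exit status 1; on this crio-backed profile docker and containerd are expected to be inactive:

    out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active docker"       # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-066448 ssh "sudo systemctl is-active containerd"   # prints inactive, non-zero exit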

                                                
                                    
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066448 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-066448
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-066448
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066448 image ls --format short --alsologtostderr:
I0723 14:12:47.260148   29130 out.go:291] Setting OutFile to fd 1 ...
I0723 14:12:47.260256   29130 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.260264   29130 out.go:304] Setting ErrFile to fd 2...
I0723 14:12:47.260269   29130 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.260439   29130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
I0723 14:12:47.260972   29130 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.261064   29130 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.261458   29130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.261510   29130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.277575   29130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
I0723 14:12:47.278129   29130 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.278773   29130 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.278795   29130 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.279137   29130 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.279356   29130 main.go:141] libmachine: (functional-066448) Calling .GetState
I0723 14:12:47.281281   29130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.281321   29130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.296370   29130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
I0723 14:12:47.296814   29130 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.297313   29130 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.297346   29130 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.297710   29130 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.297882   29130 main.go:141] libmachine: (functional-066448) Calling .DriverName
I0723 14:12:47.298067   29130 ssh_runner.go:195] Run: systemctl --version
I0723 14:12:47.298093   29130 main.go:141] libmachine: (functional-066448) Calling .GetSSHHostname
I0723 14:12:47.300832   29130 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.301188   29130 main.go:141] libmachine: (functional-066448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b6:dd", ip: ""} in network mk-functional-066448: {Iface:virbr1 ExpiryTime:2024-07-23 15:10:12 +0000 UTC Type:0 Mac:52:54:00:ff:b6:dd Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-066448 Clientid:01:52:54:00:ff:b6:dd}
I0723 14:12:47.301222   29130 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined IP address 192.168.39.182 and MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.301299   29130 main.go:141] libmachine: (functional-066448) Calling .GetSSHPort
I0723 14:12:47.301461   29130 main.go:141] libmachine: (functional-066448) Calling .GetSSHKeyPath
I0723 14:12:47.301619   29130 main.go:141] libmachine: (functional-066448) Calling .GetSSHUsername
I0723 14:12:47.301756   29130 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/functional-066448/id_rsa Username:docker}
I0723 14:12:47.388835   29130 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 14:12:47.423131   29130 main.go:141] libmachine: Making call to close driver server
I0723 14:12:47.423148   29130 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:47.423424   29130 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:47.423458   29130 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:47.423481   29130 main.go:141] libmachine: Making call to close driver server
I0723 14:12:47.423493   29130 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:47.423541   29130 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:47.423722   29130 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:47.423768   29130 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:47.423781   29130 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
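Both update-context subtests only assert that "out/minikube-linux-amd64 -p functional-066448 update-context" exits cleanly; the command rewrites the kubeconfig entry for the profile so it points at the cluster's current API-server address. A minimal Go sketch of driving the same binary the way functional_test.go does (the binary path, profile name and flags are taken from the log above; everything else is an assumption):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as the subtest, including the verbosity flags
	// visible in the log above.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-066448",
		"update-context", "--alsologtostderr", "-v=2")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("update-context failed: %v\n%s", err, out)
	}
	fmt.Printf("update-context succeeded:\n%s", out)
}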

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066448 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kicbase/echo-server           | functional-066448  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-066448  | 513240f22ff47 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066448 image ls --format table --alsologtostderr:
I0723 14:12:48.095940   29326 out.go:291] Setting OutFile to fd 1 ...
I0723 14:12:48.096189   29326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:48.096199   29326 out.go:304] Setting ErrFile to fd 2...
I0723 14:12:48.096203   29326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:48.096411   29326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
I0723 14:12:48.096931   29326 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:48.097024   29326 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:48.097392   29326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:48.097436   29326 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:48.112198   29326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
I0723 14:12:48.112602   29326 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:48.113141   29326 main.go:141] libmachine: Using API Version  1
I0723 14:12:48.113162   29326 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:48.113496   29326 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:48.113699   29326 main.go:141] libmachine: (functional-066448) Calling .GetState
I0723 14:12:48.115385   29326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:48.115419   29326 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:48.130005   29326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
I0723 14:12:48.130503   29326 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:48.130959   29326 main.go:141] libmachine: Using API Version  1
I0723 14:12:48.130978   29326 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:48.131300   29326 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:48.131509   29326 main.go:141] libmachine: (functional-066448) Calling .DriverName
I0723 14:12:48.131714   29326 ssh_runner.go:195] Run: systemctl --version
I0723 14:12:48.131736   29326 main.go:141] libmachine: (functional-066448) Calling .GetSSHHostname
I0723 14:12:48.134459   29326 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:48.134929   29326 main.go:141] libmachine: (functional-066448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b6:dd", ip: ""} in network mk-functional-066448: {Iface:virbr1 ExpiryTime:2024-07-23 15:10:12 +0000 UTC Type:0 Mac:52:54:00:ff:b6:dd Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-066448 Clientid:01:52:54:00:ff:b6:dd}
I0723 14:12:48.134956   29326 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined IP address 192.168.39.182 and MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:48.135103   29326 main.go:141] libmachine: (functional-066448) Calling .GetSSHPort
I0723 14:12:48.135266   29326 main.go:141] libmachine: (functional-066448) Calling .GetSSHKeyPath
I0723 14:12:48.135426   29326 main.go:141] libmachine: (functional-066448) Calling .GetSSHUsername
I0723 14:12:48.135579   29326 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/functional-066448/id_rsa Username:docker}
I0723 14:12:48.253284   29326 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 14:12:48.316948   29326 main.go:141] libmachine: Making call to close driver server
I0723 14:12:48.316970   29326 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:48.317240   29326 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:48.317256   29326 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:48.317264   29326 main.go:141] libmachine: Making call to close driver server
I0723 14:12:48.317272   29326 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:48.317281   29326 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:48.317521   29326 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:48.317545   29326 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:48.317573   29326 main.go:141] libmachine: Making call to close connection to plugin binary
2024/07/23 14:12:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066448 image ls --format json --alsologtostderr:
[{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"513240f22ff474d3f4760024b4cd12d727f5c828ba03930237cf98ddef4d318b","repoDigests":["localhost/minikube-local-cache-test@sha256:b2d1fa5e90caef71803576e4fd1264fc4c276d81cae7dfd05d3db7b34dc97d99"],"repoTags":["localhost/minikube-local-cache-test:functional-066448"],"size":"3330"},{"id":"82e4c8a
736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae6829615
0078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-066448"],"size":"4943877"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd
277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha2
56:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a
36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","d
ocker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066448 image ls --format json --alsologtostderr:
I0723 14:12:47.872438   29278 out.go:291] Setting OutFile to fd 1 ...
I0723 14:12:47.872554   29278 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.872564   29278 out.go:304] Setting ErrFile to fd 2...
I0723 14:12:47.872570   29278 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.872751   29278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
I0723 14:12:47.873316   29278 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.873432   29278 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.873826   29278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.873874   29278 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.890918   29278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
I0723 14:12:47.891480   29278 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.892096   29278 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.892114   29278 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.892533   29278 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.892731   29278 main.go:141] libmachine: (functional-066448) Calling .GetState
I0723 14:12:47.894584   29278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.894624   29278 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.909551   29278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
I0723 14:12:47.909919   29278 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.910480   29278 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.910504   29278 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.910796   29278 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.910963   29278 main.go:141] libmachine: (functional-066448) Calling .DriverName
I0723 14:12:47.911138   29278 ssh_runner.go:195] Run: systemctl --version
I0723 14:12:47.911159   29278 main.go:141] libmachine: (functional-066448) Calling .GetSSHHostname
I0723 14:12:47.913961   29278 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.914411   29278 main.go:141] libmachine: (functional-066448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b6:dd", ip: ""} in network mk-functional-066448: {Iface:virbr1 ExpiryTime:2024-07-23 15:10:12 +0000 UTC Type:0 Mac:52:54:00:ff:b6:dd Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-066448 Clientid:01:52:54:00:ff:b6:dd}
I0723 14:12:47.914443   29278 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined IP address 192.168.39.182 and MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.914570   29278 main.go:141] libmachine: (functional-066448) Calling .GetSSHPort
I0723 14:12:47.914717   29278 main.go:141] libmachine: (functional-066448) Calling .GetSSHKeyPath
I0723 14:12:47.914866   29278 main.go:141] libmachine: (functional-066448) Calling .GetSSHUsername
I0723 14:12:47.914995   29278 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/functional-066448/id_rsa Username:docker}
I0723 14:12:48.005163   29278 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 14:12:48.049192   29278 main.go:141] libmachine: Making call to close driver server
I0723 14:12:48.049205   29278 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:48.049635   29278 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:48.049679   29278 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:48.049681   29278 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:48.049737   29278 main.go:141] libmachine: Making call to close driver server
I0723 14:12:48.049753   29278 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:48.050009   29278 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:48.050024   29278 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
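The table, JSON and YAML listings in these subtests are three renderings of the same data: each image entry carries an id, repoDigests, repoTags and a size, which the JSON and YAML forms give in bytes as a quoted string and the table view rounds to MB/kB. A minimal Go sketch that runs the same "image ls --format json" command and decodes it, assuming only the fields visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-066448", "image", "ls",
		"--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}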

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066448 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 513240f22ff474d3f4760024b4cd12d727f5c828ba03930237cf98ddef4d318b
repoDigests:
- localhost/minikube-local-cache-test@sha256:b2d1fa5e90caef71803576e4fd1264fc4c276d81cae7dfd05d3db7b34dc97d99
repoTags:
- localhost/minikube-local-cache-test:functional-066448
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-066448
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066448 image ls --format yaml --alsologtostderr:
I0723 14:12:47.473761   29176 out.go:291] Setting OutFile to fd 1 ...
I0723 14:12:47.473863   29176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.473873   29176 out.go:304] Setting ErrFile to fd 2...
I0723 14:12:47.473877   29176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.474072   29176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
I0723 14:12:47.474638   29176 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.474760   29176 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.475176   29176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.475214   29176 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.490474   29176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
I0723 14:12:47.491022   29176 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.491613   29176 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.491640   29176 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.491963   29176 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.492131   29176 main.go:141] libmachine: (functional-066448) Calling .GetState
I0723 14:12:47.494002   29176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.494046   29176 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.509012   29176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
I0723 14:12:47.509367   29176 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.509828   29176 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.509856   29176 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.510168   29176 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.510333   29176 main.go:141] libmachine: (functional-066448) Calling .DriverName
I0723 14:12:47.510559   29176 ssh_runner.go:195] Run: systemctl --version
I0723 14:12:47.510579   29176 main.go:141] libmachine: (functional-066448) Calling .GetSSHHostname
I0723 14:12:47.513662   29176 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.514011   29176 main.go:141] libmachine: (functional-066448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b6:dd", ip: ""} in network mk-functional-066448: {Iface:virbr1 ExpiryTime:2024-07-23 15:10:12 +0000 UTC Type:0 Mac:52:54:00:ff:b6:dd Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-066448 Clientid:01:52:54:00:ff:b6:dd}
I0723 14:12:47.514036   29176 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined IP address 192.168.39.182 and MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.514142   29176 main.go:141] libmachine: (functional-066448) Calling .GetSSHPort
I0723 14:12:47.514279   29176 main.go:141] libmachine: (functional-066448) Calling .GetSSHKeyPath
I0723 14:12:47.514374   29176 main.go:141] libmachine: (functional-066448) Calling .GetSSHUsername
I0723 14:12:47.514535   29176 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/functional-066448/id_rsa Username:docker}
I0723 14:12:47.597031   29176 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 14:12:47.644936   29176 main.go:141] libmachine: Making call to close driver server
I0723 14:12:47.644952   29176 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:47.645279   29176 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:47.645318   29176 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:47.645327   29176 main.go:141] libmachine: Making call to close driver server
I0723 14:12:47.645344   29176 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:47.645553   29176 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:47.645566   29176 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:47.645592   29176 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh pgrep buildkitd: exit status 1 (193.011482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image build -t localhost/my-image:functional-066448 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 image build -t localhost/my-image:functional-066448 testdata/build --alsologtostderr: (3.09096433s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066448 image build -t localhost/my-image:functional-066448 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f241c0e7b20
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-066448
--> 9fc3d85ac34
Successfully tagged localhost/my-image:functional-066448
9fc3d85ac34634ec3675420d183805b458c4a46557f6e3434d482f284cbe91cb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066448 image build -t localhost/my-image:functional-066448 testdata/build --alsologtostderr:
I0723 14:12:47.886166   29289 out.go:291] Setting OutFile to fd 1 ...
I0723 14:12:47.886513   29289 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.886527   29289 out.go:304] Setting ErrFile to fd 2...
I0723 14:12:47.886533   29289 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:12:47.886777   29289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
I0723 14:12:47.887357   29289 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.887837   29289 config.go:182] Loaded profile config "functional-066448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:12:47.888164   29289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.888199   29289 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.902485   29289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42867
I0723 14:12:47.902842   29289 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.903332   29289 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.903356   29289 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.903693   29289 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.903875   29289 main.go:141] libmachine: (functional-066448) Calling .GetState
I0723 14:12:47.905939   29289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0723 14:12:47.905992   29289 main.go:141] libmachine: Launching plugin server for driver kvm2
I0723 14:12:47.922295   29289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
I0723 14:12:47.922658   29289 main.go:141] libmachine: () Calling .GetVersion
I0723 14:12:47.923133   29289 main.go:141] libmachine: Using API Version  1
I0723 14:12:47.923151   29289 main.go:141] libmachine: () Calling .SetConfigRaw
I0723 14:12:47.923478   29289 main.go:141] libmachine: () Calling .GetMachineName
I0723 14:12:47.923671   29289 main.go:141] libmachine: (functional-066448) Calling .DriverName
I0723 14:12:47.923888   29289 ssh_runner.go:195] Run: systemctl --version
I0723 14:12:47.923916   29289 main.go:141] libmachine: (functional-066448) Calling .GetSSHHostname
I0723 14:12:47.926363   29289 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.926769   29289 main.go:141] libmachine: (functional-066448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b6:dd", ip: ""} in network mk-functional-066448: {Iface:virbr1 ExpiryTime:2024-07-23 15:10:12 +0000 UTC Type:0 Mac:52:54:00:ff:b6:dd Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-066448 Clientid:01:52:54:00:ff:b6:dd}
I0723 14:12:47.926798   29289 main.go:141] libmachine: (functional-066448) DBG | domain functional-066448 has defined IP address 192.168.39.182 and MAC address 52:54:00:ff:b6:dd in network mk-functional-066448
I0723 14:12:47.926913   29289 main.go:141] libmachine: (functional-066448) Calling .GetSSHPort
I0723 14:12:47.927075   29289 main.go:141] libmachine: (functional-066448) Calling .GetSSHKeyPath
I0723 14:12:47.927227   29289 main.go:141] libmachine: (functional-066448) Calling .GetSSHUsername
I0723 14:12:47.927372   29289 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/functional-066448/id_rsa Username:docker}
I0723 14:12:48.013334   29289 build_images.go:161] Building image from path: /tmp/build.1806684929.tar
I0723 14:12:48.013398   29289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0723 14:12:48.024916   29289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1806684929.tar
I0723 14:12:48.029928   29289 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1806684929.tar: stat -c "%s %y" /var/lib/minikube/build/build.1806684929.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1806684929.tar': No such file or directory
I0723 14:12:48.029956   29289 ssh_runner.go:362] scp /tmp/build.1806684929.tar --> /var/lib/minikube/build/build.1806684929.tar (3072 bytes)
I0723 14:12:48.081527   29289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1806684929
I0723 14:12:48.103589   29289 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1806684929 -xf /var/lib/minikube/build/build.1806684929.tar
I0723 14:12:48.118289   29289 crio.go:315] Building image: /var/lib/minikube/build/build.1806684929
I0723 14:12:48.118343   29289 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-066448 /var/lib/minikube/build/build.1806684929 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0723 14:12:50.906635   29289 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-066448 /var/lib/minikube/build/build.1806684929 --cgroup-manager=cgroupfs: (2.788266385s)
I0723 14:12:50.906698   29289 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1806684929
I0723 14:12:50.920492   29289 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1806684929.tar
I0723 14:12:50.930117   29289 build_images.go:217] Built localhost/my-image:functional-066448 from /tmp/build.1806684929.tar
I0723 14:12:50.930168   29289 build_images.go:133] succeeded building to: functional-066448
I0723 14:12:50.930175   29289 build_images.go:134] failed building to: 
I0723 14:12:50.930219   29289 main.go:141] libmachine: Making call to close driver server
I0723 14:12:50.930237   29289 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:50.930504   29289 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:50.930527   29289 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:50.930530   29289 main.go:141] libmachine: Making call to close connection to plugin binary
I0723 14:12:50.930549   29289 main.go:141] libmachine: Making call to close driver server
I0723 14:12:50.930560   29289 main.go:141] libmachine: (functional-066448) Calling .Close
I0723 14:12:50.930784   29289 main.go:141] libmachine: (functional-066448) DBG | Closing plugin on server side
I0723 14:12:50.930819   29289 main.go:141] libmachine: Successfully made call to close driver server
I0723 14:12:50.930842   29289 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)
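Judging from the STEP lines above, the build context in testdata/build is a three-instruction Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); the test first checks via ssh pgrep whether buildkitd is running in the guest (it is not, so the build is delegated to podman, as the stderr shows), then runs image build and verifies the tag with image ls. A hedged Go sketch of that check-build-verify flow (binary path, profile and tag are from the log; the rest is an assumption):

package main

import (
	"log"
	"os/exec"
	"strings"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "functional-066448"
	tag      = "localhost/my-image:functional-066448"
)

// run invokes the minikube binary against the test profile and returns
// its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command(minikube,
		append([]string{"-p", profile}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// The log shows pgrep exiting non-zero: buildkitd is not running, and
	// the build is handled by podman inside the guest.
	if _, err := run("ssh", "pgrep buildkitd"); err == nil {
		log.Println("buildkitd is running; the build may take a different path")
	}

	// Build the image from the local context directory.
	if out, err := run("image", "build", "-t", tag, "testdata/build"); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Confirm the new tag shows up in the runtime's image list.
	out, err := run("image", "ls")
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if !strings.Contains(out, tag) {
		log.Fatalf("built image %s missing from:\n%s", tag, out)
	}
	log.Printf("%s built and listed", tag)
}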

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.769045161s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-066448
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)
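The Setup step stages the test image on the host: it pulls kicbase/echo-server:1.0 with the local docker CLI and retags it with the profile name, so the load/save subtests that follow have an unambiguous, profile-scoped reference to move around. A minimal Go sketch of that staging (the same two commands as the log; assumes a host with docker installed):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Pull the upstream tag, then give it a profile-scoped name so the
	// later image load/save subtests operate on a unique reference.
	steps := [][]string{
		{"docker", "pull", "docker.io/kicbase/echo-server:1.0"},
		{"docker", "tag", "docker.io/kicbase/echo-server:1.0",
			"docker.io/kicbase/echo-server:functional-066448"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
	log.Println("echo-server staged as docker.io/kicbase/echo-server:functional-066448")
}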

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image load --daemon docker.io/kicbase/echo-server:functional-066448 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 image load --daemon docker.io/kicbase/echo-server:functional-066448 --alsologtostderr: (1.071080285s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
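ImageLoadDaemon (and the Reload and TagAndLoad variants below) copies the echo-server tag staged in Setup from the host's Docker daemon into the cluster's container runtime, then re-lists images to confirm it arrived. A hedged Go sketch of that load-and-verify step (binary path, profile and tag are from the log):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "functional-066448"
		img      = "docker.io/kicbase/echo-server:functional-066448"
	)
	// Copy the tag from the host's Docker daemon into the cluster runtime.
	if out, err := exec.Command(minikube, "-p", profile,
		"image", "load", "--daemon", img).CombinedOutput(); err != nil {
		log.Fatalf("image load --daemon failed: %v\n%s", err, out)
	}
	// Re-list images, as the test does, to confirm the tag is present.
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if strings.Contains(string(out), "kicbase/echo-server") {
		log.Println("echo-server present in the cluster image list")
	}
}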

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (19.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdany-port1198782672/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721743933343098402" to /tmp/TestFunctionalparallelMountCmdany-port1198782672/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721743933343098402" to /tmp/TestFunctionalparallelMountCmdany-port1198782672/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721743933343098402" to /tmp/TestFunctionalparallelMountCmdany-port1198782672/001/test-1721743933343098402
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.133935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 23 14:12 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 23 14:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 23 14:12 test-1721743933343098402
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh cat /mount-9p/test-1721743933343098402
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-066448 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a722cdce-91c1-437c-9367-8db0979dc69f] Pending
helpers_test.go:344: "busybox-mount" [a722cdce-91c1-437c-9367-8db0979dc69f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a722cdce-91c1-437c-9367-8db0979dc69f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a722cdce-91c1-437c-9367-8db0979dc69f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.003627299s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-066448 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdany-port1198782672/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.41s)
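The any-port mount test keeps "minikube mount <hostdir>:/mount-9p" running as a background process, waits for the 9p filesystem to become visible inside the guest (the first findmnt above fails while the mount is still coming up), exercises it from a busybox pod, and finally unmounts and stops the mount process. A hedged Go sketch of the host-side choreography (binary path and profile are from the log; the host directory and retry limit are placeholders):

package main

import (
	"log"
	"os/exec"
	"time"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "functional-066448"
)

func main() {
	// /tmp/mount-src stands in for the temporary directory the test creates.
	mount := exec.Command(minikube, "mount", "-p", profile,
		"/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p filesystem is visible inside the guest.
	for i := 0; i < 10; i++ {
		err := exec.Command(minikube, "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("/mount-9p is mounted over 9p")
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Unmount inside the guest before stopping the mount process.
	if out, err := exec.Command(minikube, "-p", profile, "ssh",
		"sudo umount -f /mount-9p").CombinedOutput(); err != nil {
		log.Printf("umount: %v\n%s", err, out)
	}
}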

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image load --daemon docker.io/kicbase/echo-server:functional-066448 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-066448
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image load --daemon docker.io/kicbase/echo-server:functional-066448 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image save docker.io/kicbase/echo-server:functional-066448 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 image save docker.io/kicbase/echo-server:functional-066448 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.09402152s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image rm docker.io/kicbase/echo-server:functional-066448 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-066448
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 image save --daemon docker.io/kicbase/echo-server:functional-066448 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-066448
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
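Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests move the echo-server image through a full round trip: save it from the cluster to a tarball, remove it from the runtime, load it back from the tarball, and finally save it into the host's Docker daemon. A hedged Go sketch of the same sequence (subcommands and tag are from the log; the tarball path is a placeholder and error handling is simplified):

package main

import (
	"log"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "functional-066448"
		img      = "docker.io/kicbase/echo-server:functional-066448"
		tarball  = "/tmp/echo-server-save.tar" // placeholder path
	)
	steps := [][]string{
		{"image", "save", img, tarball},    // cluster -> tarball
		{"image", "rm", img},               // drop it from the runtime
		{"image", "load", tarball},         // tarball -> cluster
		{"image", "save", "--daemon", img}, // cluster -> host Docker daemon
	}
	for _, s := range steps {
		args := append([]string{"-p", profile}, s...)
		if out, err := exec.Command(minikube, args...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
		log.Printf("ok: %v", s)
	}
}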

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdspecific-port3326444967/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (222.119129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T /mount-9p | grep 9p"
E0723 14:12:33.543872   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdspecific-port3326444967/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066448 ssh "sudo umount -f /mount-9p": exit status 1 (238.77683ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-066448 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdspecific-port3326444967/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-066448 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-066448 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-7gpmw" [a2adc817-266f-41b6-a803-735c666ce9d6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-7gpmw" [a2adc817-266f-41b6-a803-735c666ce9d6] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005154501s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)
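
Editor's note: the test above waits for pods labelled app=hello-node to become healthy. A roughly equivalent manual check, sketched with kubectl wait (the 90s timeout is an illustrative choice, not taken from the suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-066448",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=hello-node", "--timeout=90s").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("pods not ready: %v\n", err)
	}
}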

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-066448 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1755543574/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)
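
Editor's note: the cleanup step above relies on `minikube mount --kill=true`, which tears down the running mount daemons for the profile (hence the "assuming dead" messages for the three mount processes). A hypothetical one-shot cleanup helper, not taken from the repo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-066448", "--kill=true").CombinedOutput()
	if err != nil {
		fmt.Printf("mount --kill failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("mount daemons terminated:\n%s", out)
}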

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "274.744604ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.222685ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "275.949498ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "44.48178ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
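
Editor's note: the two profile subtests above only time `profile list -o json`; a sketch of actually consuming that JSON follows. It deliberately decodes into a generic map, since the exact schema is not shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	raw, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Printf("profile list failed: %v\n", err)
		return
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(raw, &doc); err != nil {
		fmt.Printf("unexpected output: %v\n", err)
		return
	}
	for key := range doc { // top-level groupings; names depend on the minikube version
		fmt.Println("top-level key:", key)
	}
}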

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 service list: (1.289095991s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-066448 service list -o json: (1.242774427s)
functional_test.go:1490: Took "1.242909612s" to run "out/minikube-linux-amd64 -p functional-066448 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.182:31606
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-066448 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.182:31606
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
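
Editor's note: the HTTPS, Format, and URL subtests above all resolve the same NodePort endpoint (http://192.168.39.182:31606 in this run). A small sketch that chains the lookup with an actual request; it is illustrative, not suite code:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	raw, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-066448",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Printf("could not resolve service URL: %v\n", err)
		return
	}
	url := strings.TrimSpace(string(raw)) // e.g. http://192.168.39.182:31606
	resp, err := http.Get(url)
	if err != nil {
		fmt.Printf("GET %s failed: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}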

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-066448
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-066448
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-066448
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (227.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-533645 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0723 14:14:49.700182   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 14:15:17.384848   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-533645 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m46.736791334s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (227.39s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.47s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-533645 -- rollout status deployment/busybox: (4.410335357s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-cd87c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-kq2ww -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-tlvlp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-cd87c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-kq2ww -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-tlvlp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-cd87c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-kq2ww -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-tlvlp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.47s)
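
Editor's note: the DeployApp block resolves three names (an external host, the in-cluster service, and its fully qualified form) from every busybox replica. A condensed sketch of that loop, with pod names copied from this run; they will differ on any other run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-cd87c", "busybox-fc5497c4f-kq2ww", "busybox-fc5497c4f-tlvlp"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-533645",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s -> %s (err=%v)\n%s\n", pod, name, err, out)
		}
	}
}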

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-cd87c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-cd87c -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-kq2ww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-kq2ww -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-tlvlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-533645 -- exec busybox-fc5497c4f-tlvlp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
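
Editor's note: the awk/cut pipeline above pulls the address that host.minikube.internal resolves to out of busybox's nslookup output, and the follow-up command pings that address from the same pod. The same two steps as a standalone sketch (pod name copied from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-cd87c"
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	raw, err := exec.Command("kubectl", "--context", "ha-533645",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Printf("resolve failed: %v\n", err)
		return
	}
	hostIP := strings.TrimSpace(string(raw)) // 192.168.39.1 in this run
	out, err := exec.Command("kubectl", "--context", "ha-533645",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	fmt.Printf("ping %s (err=%v)\n%s", hostIP, err, out)
}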

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-533645 -v=7 --alsologtostderr
E0723 14:17:11.819039   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:11.824345   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:11.834600   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:11.854907   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:11.895201   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:11.975508   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:12.135902   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:12.456835   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:13.097811   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:14.378348   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:16.938654   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:22.059061   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:17:32.299586   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-533645 -v=7 --alsologtostderr: (58.84725387s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
E0723 14:17:52.779732   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.67s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-533645 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp testdata/cp-test.txt ha-533645:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645:/home/docker/cp-test.txt ha-533645-m02:/home/docker/cp-test_ha-533645_ha-533645-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test_ha-533645_ha-533645-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645:/home/docker/cp-test.txt ha-533645-m03:/home/docker/cp-test_ha-533645_ha-533645-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test_ha-533645_ha-533645-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645:/home/docker/cp-test.txt ha-533645-m04:/home/docker/cp-test_ha-533645_ha-533645-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test_ha-533645_ha-533645-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp testdata/cp-test.txt ha-533645-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m02:/home/docker/cp-test.txt ha-533645:/home/docker/cp-test_ha-533645-m02_ha-533645.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test_ha-533645-m02_ha-533645.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m02:/home/docker/cp-test.txt ha-533645-m03:/home/docker/cp-test_ha-533645-m02_ha-533645-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test_ha-533645-m02_ha-533645-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m02:/home/docker/cp-test.txt ha-533645-m04:/home/docker/cp-test_ha-533645-m02_ha-533645-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test_ha-533645-m02_ha-533645-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp testdata/cp-test.txt ha-533645-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt ha-533645:/home/docker/cp-test_ha-533645-m03_ha-533645.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test_ha-533645-m03_ha-533645.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt ha-533645-m02:/home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test_ha-533645-m03_ha-533645-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m03:/home/docker/cp-test.txt ha-533645-m04:/home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test_ha-533645-m03_ha-533645-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp testdata/cp-test.txt ha-533645-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile811988388/001/cp-test_ha-533645-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt ha-533645:/home/docker/cp-test_ha-533645-m04_ha-533645.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645 "sudo cat /home/docker/cp-test_ha-533645-m04_ha-533645.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt ha-533645-m02:/home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m02 "sudo cat /home/docker/cp-test_ha-533645-m04_ha-533645-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 cp ha-533645-m04:/home/docker/cp-test.txt ha-533645-m03:/home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 ssh -n ha-533645-m03 "sudo cat /home/docker/cp-test_ha-533645-m04_ha-533645-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.53s)
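
Editor's note: the CopyFile matrix above repeats one pattern per node pair: `minikube cp` the file in, then read it back over `minikube ssh` and compare. One round trip of that pattern as a standalone sketch (node name and paths taken from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		fmt.Printf("read %s: %v\n", src, err)
		return
	}
	// Copy host -> node m02, then cat it back from inside that node.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-533645",
		"cp", src, "ha-533645-m02:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-533645",
		"ssh", "-n", "ha-533645-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Printf("ssh cat failed: %v\n", err)
		return
	}
	fmt.Println("round trip intact:", string(got) == string(want))
}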

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.452999039s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-533645 node delete m03 -v=7 --alsologtostderr: (16.447087261s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (314.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-533645 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0723 14:32:11.818558   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:33:34.863078   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:34:49.699633   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-533645 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m13.548472743s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (314.31s)
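
Editor's note: after the restart the test confirms every node reports a Ready condition via the go-template shown above. A sketch of the same check that also flags any status other than "True"; it is illustrative only, not the test helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Printf("kubectl get nodes failed: %v\n", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not Ready:", status)
		}
	}
	fmt.Print(string(out))
}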

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-533645 --control-plane -v=7 --alsologtostderr
E0723 14:37:11.819287   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-533645 --control-plane -v=7 --alsologtostderr: (1m16.625919461s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-533645 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.41s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestJSONOutput/start/Command (96.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-629561 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-629561 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.10597221s)
--- PASS: TestJSONOutput/start/Command (96.11s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-629561 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-629561 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-629561 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-629561 --output=json --user=testUser: (7.333604136s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-122855 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-122855 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.143478ms)

-- stdout --
	{"specversion":"1.0","id":"ae0aa268-b50f-49bd-b792-acf87afe3033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-122855] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f17cae80-517d-4d89-8a4d-91c8c3b728a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"e291abb6-18ce-43dc-b15e-cd7b3b0b7df5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f80d2b4-7937-497a-bcf9-207ee98465e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig"}}
	{"specversion":"1.0","id":"7bd73d4e-c757-4574-b7b2-fa2edce3d8d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube"}}
	{"specversion":"1.0","id":"8edaae30-38bd-47b5-ac4a-3ae0ac0065d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"231b9ed0-4f57-4896-80ab-4da1f48d3ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"32fe2895-d362-42a3-8e67-e76262d5eb75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-122855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-122855
--- PASS: TestErrorJSONOutput (0.18s)
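
Editor's note: the stdout block above shows that --output=json emits newline-delimited CloudEvents-style records (specversion, type, data, ...). A sketch that decodes such a stream and prints each event's type and data.message; only fields visible in the log are assumed:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Usage idea: out/minikube-linux-amd64 start ... --output=json | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("%-45s %s\n", ev.Type, ev.Data["message"])
	}
}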

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-769945 --driver=kvm2  --container-runtime=crio
E0723 14:39:49.700297   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-769945 --driver=kvm2  --container-runtime=crio: (42.876069788s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-772656 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-772656 --driver=kvm2  --container-runtime=crio: (41.78295943s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-769945
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-772656
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-772656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-772656
helpers_test.go:175: Cleaning up "first-769945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-769945
--- PASS: TestMinikubeProfile (87.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-110007 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-110007 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.41269575s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.41s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-110007 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-110007 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122364 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122364 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.315232831s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.32s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-110007 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-122364
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-122364: (1.276815256s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.64s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-122364
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-122364: (22.643129595s)
--- PASS: TestMountStart/serial/RestartStopped (23.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-122364 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-574866 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0723 14:42:11.818904   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 14:42:52.746986   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-574866 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.253651534s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.65s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-574866 -- rollout status deployment/busybox: (3.818876827s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-5g296 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-q96vx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-5g296 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-q96vx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-5g296 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-q96vx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-5g296 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-5g296 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-q96vx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-574866 -- exec busybox-fc5497c4f-q96vx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (47.43s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-574866 -v 3 --alsologtostderr
E0723 14:44:49.700322   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-574866 -v 3 --alsologtostderr: (46.88227028s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.43s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-574866 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp testdata/cp-test.txt multinode-574866:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866:/home/docker/cp-test.txt multinode-574866-m02:/home/docker/cp-test_multinode-574866_multinode-574866-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test_multinode-574866_multinode-574866-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866:/home/docker/cp-test.txt multinode-574866-m03:/home/docker/cp-test_multinode-574866_multinode-574866-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test_multinode-574866_multinode-574866-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp testdata/cp-test.txt multinode-574866-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt multinode-574866:/home/docker/cp-test_multinode-574866-m02_multinode-574866.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test_multinode-574866-m02_multinode-574866.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m02:/home/docker/cp-test.txt multinode-574866-m03:/home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test_multinode-574866-m02_multinode-574866-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp testdata/cp-test.txt multinode-574866-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418850268/001/cp-test_multinode-574866-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt multinode-574866:/home/docker/cp-test_multinode-574866-m03_multinode-574866.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test_multinode-574866-m03_multinode-574866.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866-m03:/home/docker/cp-test.txt multinode-574866-m02:/home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 "sudo cat /home/docker/cp-test_multinode-574866-m03_multinode-574866-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.95s)
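
The long sequence above exercises every copy direction of minikube cp and verifies each copy with an ssh cat on the destination. A condensed sketch of the pattern, using the same profile and node names:

# Host -> node, then verify on the node
out/minikube-linux-amd64 -p multinode-574866 cp testdata/cp-test.txt multinode-574866:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866 "sudo cat /home/docker/cp-test.txt"
# Node -> node, then verify on the destination node
out/minikube-linux-amd64 -p multinode-574866 cp multinode-574866:/home/docker/cp-test.txt \
  multinode-574866-m02:/home/docker/cp-test_multinode-574866_multinode-574866-m02.txt
out/minikube-linux-amd64 -p multinode-574866 ssh -n multinode-574866-m02 \
  "sudo cat /home/docker/cp-test_multinode-574866_multinode-574866-m02.txt"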

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-574866 node stop m03: (1.43930288s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-574866 status: exit status 7 (411.93276ms)

                                                
                                                
-- stdout --
	multinode-574866
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-574866-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-574866-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr: exit status 7 (406.259052ms)

                                                
                                                
-- stdout --
	multinode-574866
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-574866-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-574866-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:45:01.364986   47335 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:45:01.365108   47335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:45:01.365120   47335 out.go:304] Setting ErrFile to fd 2...
	I0723 14:45:01.365124   47335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:45:01.365370   47335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 14:45:01.365571   47335 out.go:298] Setting JSON to false
	I0723 14:45:01.365601   47335 mustload.go:65] Loading cluster: multinode-574866
	I0723 14:45:01.365638   47335 notify.go:220] Checking for updates...
	I0723 14:45:01.366101   47335 config.go:182] Loaded profile config "multinode-574866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:45:01.366122   47335 status.go:255] checking status of multinode-574866 ...
	I0723 14:45:01.366650   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.366699   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.386421   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I0723 14:45:01.386939   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.387473   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.387496   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.387826   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.388005   47335 main.go:141] libmachine: (multinode-574866) Calling .GetState
	I0723 14:45:01.389526   47335 status.go:330] multinode-574866 host status = "Running" (err=<nil>)
	I0723 14:45:01.389541   47335 host.go:66] Checking if "multinode-574866" exists ...
	I0723 14:45:01.389932   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.389983   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.404924   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0723 14:45:01.405410   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.405854   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.405871   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.406124   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.406324   47335 main.go:141] libmachine: (multinode-574866) Calling .GetIP
	I0723 14:45:01.409037   47335 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:45:01.409493   47335 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:45:01.409522   47335 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:45:01.409648   47335 host.go:66] Checking if "multinode-574866" exists ...
	I0723 14:45:01.409921   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.409961   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.425133   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0723 14:45:01.425608   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.426271   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.426298   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.426671   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.426833   47335 main.go:141] libmachine: (multinode-574866) Calling .DriverName
	I0723 14:45:01.427021   47335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:45:01.427043   47335 main.go:141] libmachine: (multinode-574866) Calling .GetSSHHostname
	I0723 14:45:01.429798   47335 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:45:01.430232   47335 main.go:141] libmachine: (multinode-574866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:5c", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:42:13 +0000 UTC Type:0 Mac:52:54:00:ae:b0:5c Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-574866 Clientid:01:52:54:00:ae:b0:5c}
	I0723 14:45:01.430283   47335 main.go:141] libmachine: (multinode-574866) DBG | domain multinode-574866 has defined IP address 192.168.39.146 and MAC address 52:54:00:ae:b0:5c in network mk-multinode-574866
	I0723 14:45:01.430444   47335 main.go:141] libmachine: (multinode-574866) Calling .GetSSHPort
	I0723 14:45:01.430625   47335 main.go:141] libmachine: (multinode-574866) Calling .GetSSHKeyPath
	I0723 14:45:01.430785   47335 main.go:141] libmachine: (multinode-574866) Calling .GetSSHUsername
	I0723 14:45:01.430936   47335 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866/id_rsa Username:docker}
	I0723 14:45:01.509354   47335 ssh_runner.go:195] Run: systemctl --version
	I0723 14:45:01.516146   47335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:45:01.531244   47335 kubeconfig.go:125] found "multinode-574866" server: "https://192.168.39.146:8443"
	I0723 14:45:01.531275   47335 api_server.go:166] Checking apiserver status ...
	I0723 14:45:01.531333   47335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:45:01.543808   47335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1200/cgroup
	W0723 14:45:01.552583   47335 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1200/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0723 14:45:01.552652   47335 ssh_runner.go:195] Run: ls
	I0723 14:45:01.556559   47335 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0723 14:45:01.560599   47335 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0723 14:45:01.560618   47335 status.go:422] multinode-574866 apiserver status = Running (err=<nil>)
	I0723 14:45:01.560628   47335 status.go:257] multinode-574866 status: &{Name:multinode-574866 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:45:01.560641   47335 status.go:255] checking status of multinode-574866-m02 ...
	I0723 14:45:01.560950   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.560992   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.575975   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0723 14:45:01.576320   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.576823   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.576842   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.577147   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.577372   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetState
	I0723 14:45:01.579009   47335 status.go:330] multinode-574866-m02 host status = "Running" (err=<nil>)
	I0723 14:45:01.579024   47335 host.go:66] Checking if "multinode-574866-m02" exists ...
	I0723 14:45:01.579314   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.579362   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.593854   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37585
	I0723 14:45:01.594301   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.594755   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.594774   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.595051   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.595240   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetIP
	I0723 14:45:01.598008   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | domain multinode-574866-m02 has defined MAC address 52:54:00:a8:d4:27 in network mk-multinode-574866
	I0723 14:45:01.598464   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:d4:27", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:43:25 +0000 UTC Type:0 Mac:52:54:00:a8:d4:27 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:multinode-574866-m02 Clientid:01:52:54:00:a8:d4:27}
	I0723 14:45:01.598507   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | domain multinode-574866-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:a8:d4:27 in network mk-multinode-574866
	I0723 14:45:01.598636   47335 host.go:66] Checking if "multinode-574866-m02" exists ...
	I0723 14:45:01.598925   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.598969   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.613694   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45133
	I0723 14:45:01.614198   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.614719   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.614746   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.615029   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.615202   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .DriverName
	I0723 14:45:01.615409   47335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:45:01.615440   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetSSHHostname
	I0723 14:45:01.617922   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | domain multinode-574866-m02 has defined MAC address 52:54:00:a8:d4:27 in network mk-multinode-574866
	I0723 14:45:01.618256   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:d4:27", ip: ""} in network mk-multinode-574866: {Iface:virbr1 ExpiryTime:2024-07-23 15:43:25 +0000 UTC Type:0 Mac:52:54:00:a8:d4:27 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:multinode-574866-m02 Clientid:01:52:54:00:a8:d4:27}
	I0723 14:45:01.618275   47335 main.go:141] libmachine: (multinode-574866-m02) DBG | domain multinode-574866-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:a8:d4:27 in network mk-multinode-574866
	I0723 14:45:01.618437   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetSSHPort
	I0723 14:45:01.618609   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetSSHKeyPath
	I0723 14:45:01.618756   47335 main.go:141] libmachine: (multinode-574866-m02) Calling .GetSSHUsername
	I0723 14:45:01.618896   47335 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19319-11303/.minikube/machines/multinode-574866-m02/id_rsa Username:docker}
	I0723 14:45:01.697476   47335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:45:01.712092   47335 status.go:257] multinode-574866-m02 status: &{Name:multinode-574866-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:45:01.712125   47335 status.go:255] checking status of multinode-574866-m03 ...
	I0723 14:45:01.712444   47335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0723 14:45:01.712482   47335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0723 14:45:01.728090   47335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42373
	I0723 14:45:01.728525   47335 main.go:141] libmachine: () Calling .GetVersion
	I0723 14:45:01.729042   47335 main.go:141] libmachine: Using API Version  1
	I0723 14:45:01.729067   47335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0723 14:45:01.729376   47335 main.go:141] libmachine: () Calling .GetMachineName
	I0723 14:45:01.729555   47335 main.go:141] libmachine: (multinode-574866-m03) Calling .GetState
	I0723 14:45:01.730937   47335 status.go:330] multinode-574866-m03 host status = "Stopped" (err=<nil>)
	I0723 14:45:01.730953   47335 status.go:343] host is not running, skipping remaining checks
	I0723 14:45:01.730960   47335 status.go:257] multinode-574866-m03 status: &{Name:multinode-574866-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
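
As logged above, minikube status returned exit status 7 while node m03 was stopped, so a non-zero exit by itself is not a failure here. A short sketch of stopping a node and reading the per-node state with the same profile (the echo branch is illustrative, not part of the test):

# Stop the third node, then inspect status; exit status 7 reflects the stopped host, as in the run above
out/minikube-linux-amd64 -p multinode-574866 node stop m03
out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr \
  || echo "status exited $? (expected while m03 is stopped)"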

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-574866 node start m03 -v=7 --alsologtostderr: (39.132342789s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-574866 node delete m03: (1.882479085s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.40s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-574866 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0723 14:54:49.700122   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-574866 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.468457721s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-574866 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-574866
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-574866-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-574866-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.954633ms)

                                                
                                                
-- stdout --
	* [multinode-574866-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-574866-m02' is duplicated with machine name 'multinode-574866-m02' in profile 'multinode-574866'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-574866-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-574866-m03 --driver=kvm2  --container-runtime=crio: (38.658181708s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-574866
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-574866: exit status 80 (208.305442ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-574866 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-574866-m03 already exists in multinode-574866-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-574866-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.94s)
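
Two separate guards are exercised above: start refuses a profile name that collides with a machine name inside an existing multi-node profile (multinode-574866-m02 is already the second node of multinode-574866, exit status 14), and node add refuses to create a node whose generated name collides with another profile (exit status 80). A sketch of both, with the names from this run:

# Exit 14: profile name duplicates a machine name inside multinode-574866
out/minikube-linux-amd64 start -p multinode-574866-m02 --driver=kvm2 --container-runtime=crio
# Works: -m03 is free, so it comes up as a standalone profile
out/minikube-linux-amd64 start -p multinode-574866-m03 --driver=kvm2 --container-runtime=crio
# Exit 80: the next node of multinode-574866 would be named m03, which now collides
out/minikube-linux-amd64 node add -p multinode-574866
# Cleanup
out/minikube-linux-amd64 delete -p multinode-574866-m03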

                                                
                                    
TestScheduledStopUnix (113.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-846256 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-846256 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.633364378s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-846256 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-846256 -n scheduled-stop-846256
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-846256 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-846256 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-846256 -n scheduled-stop-846256
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-846256
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-846256 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-846256
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-846256: exit status 7 (64.304852ms)

                                                
                                                
-- stdout --
	scheduled-stop-846256
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-846256 -n scheduled-stop-846256
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-846256 -n scheduled-stop-846256: exit status 7 (64.631161ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-846256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-846256
--- PASS: TestScheduledStopUnix (113.17s)
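
The scheduled-stop flow above reduces to arming a stop, cancelling it, and then letting a short schedule fire. A sketch with the same flags and timings used by this test:

# Arm a stop five minutes out, then cancel it; the cluster keeps running
out/minikube-linux-amd64 stop -p scheduled-stop-846256 --schedule 5m
out/minikube-linux-amd64 stop -p scheduled-stop-846256 --cancel-scheduled
# Arm a short schedule and let it expire
out/minikube-linux-amd64 stop -p scheduled-stop-846256 --schedule 15s
# Afterwards status reports the host as Stopped and exits 7, as shown above
out/minikube-linux-amd64 status -p scheduled-stop-846256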

                                                
                                    
TestRunningBinaryUpgrade (218.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3071745090 start -p running-upgrade-635207 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0723 15:04:49.699483   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3071745090 start -p running-upgrade-635207 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.961214273s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-635207 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-635207 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.445521124s)
helpers_test.go:175: Cleaning up "running-upgrade-635207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-635207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-635207: (1.195174561s)
--- PASS: TestRunningBinaryUpgrade (218.92s)
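
The upgrade test starts a cluster with an older released binary and then re-runs start on the same profile with the binary under test, so the new binary has to take over the already-running VM. A sketch of that flow (the v1.26.0 binary path is the temporary file downloaded for this run):

# Bring the cluster up with the old release, then restart the same profile with the new binary
/tmp/minikube-v1.26.0.3071745090 start -p running-upgrade-635207 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 start -p running-upgrade-635207 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 delete -p running-upgrade-635207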

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.094454ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-618215] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
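
As the stderr above shows, --no-kubernetes and --kubernetes-version are mutually exclusive and the combination is rejected with exit status 14 (MK_USAGE). A sketch of the rejected invocation and the accepted alternative, following the hint minikube prints:

# Rejected: a pinned Kubernetes version contradicts --no-kubernetes (exit 14)
out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
# Accepted: drop the version flag (and unset a globally configured one, per the hint above)
out/minikube-linux-amd64 config unset kubernetes-version
out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --driver=kvm2 --container-runtime=crio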

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-618215 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-618215 --driver=kvm2  --container-runtime=crio: (1m34.278520247s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-618215 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                    
TestPause/serial/Start (151.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-704998 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-704998 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m31.913846801s)
--- PASS: TestPause/serial/Start (151.91s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (39.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.03214105s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-618215 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-618215 status -o json: exit status 2 (249.440248ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-618215","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-618215
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-618215: (1.142047806s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.42s)

                                                
                                    
TestNoKubernetes/serial/Start (51.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0723 15:06:54.865124   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 15:07:11.818572   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-618215 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.04432711s)
--- PASS: TestNoKubernetes/serial/Start (51.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-618215 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-618215 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.782171ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
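
The "not running" check is just systemctl is-active for the kubelet unit over minikube ssh; the non-zero exit seen above (systemctl reports status 3 for an inactive unit, surfaced here as ssh exit 1) is the expected outcome with --no-kubernetes. A sketch (the echo branches are illustrative):

# Confirm kubelet is inactive inside the --no-kubernetes VM
out/minikube-linux-amd64 ssh -p NoKubernetes-618215 "sudo systemctl is-active --quiet service kubelet" \
  && echo "kubelet is active (unexpected for --no-kubernetes)" \
  || echo "kubelet is not active (expected)"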

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.553247296s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.802779264s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-704998 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-704998 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.611465371s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.65s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-618215
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-618215: (1.302303212s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-618215 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-618215 --driver=kvm2  --container-runtime=crio: (21.023660292s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.02s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-704998 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-704998 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-704998 --output=json --layout=cluster: exit status 2 (232.453779ms)

                                                
                                                
-- stdout --
	{"Name":"pause-704998","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-704998","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-704998 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-704998 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)
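
Taken together, the four Pause subtests above pause the cluster, verify the paused state via the JSON status (StatusCode 418 / "Paused" for the cluster and apiserver, kubelet reported as Stopped, command exit status 2), unpause it, and pause it again. A sketch of the cycle with the same profile:

# Pause, verify, unpause, pause again (status exits 2 and reports "Paused" while paused)
out/minikube-linux-amd64 pause -p pause-704998 --alsologtostderr -v=5
out/minikube-linux-amd64 status -p pause-704998 --output=json --layout=cluster
out/minikube-linux-amd64 unpause -p pause-704998 --alsologtostderr -v=5
out/minikube-linux-amd64 pause -p pause-704998 --alsologtostderr -v=5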

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-618215 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-618215 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.784108ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/DeletePaused (1.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-704998 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-704998 --alsologtostderr -v=5: (1.020166307s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (34.97s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (34.971693925s)
--- PASS: TestPause/serial/VerifyDeletedResources (34.97s)

                                                
                                    
TestNetworkPlugins/group/false (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-562147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-562147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.215605ms)

                                                
                                                
-- stdout --
	* [false-562147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:09:15.094960   59279 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:09:15.095056   59279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:15.095066   59279 out.go:304] Setting ErrFile to fd 2...
	I0723 15:09:15.095072   59279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:09:15.095291   59279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-11303/.minikube/bin
	I0723 15:09:15.096006   59279 out.go:298] Setting JSON to false
	I0723 15:09:15.096942   59279 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6701,"bootTime":1721740654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0723 15:09:15.097000   59279 start.go:139] virtualization: kvm guest
	I0723 15:09:15.099317   59279 out.go:177] * [false-562147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0723 15:09:15.100627   59279 notify.go:220] Checking for updates...
	I0723 15:09:15.100645   59279 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:09:15.101979   59279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:09:15.103274   59279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-11303/kubeconfig
	I0723 15:09:15.104428   59279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-11303/.minikube
	I0723 15:09:15.105612   59279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0723 15:09:15.106788   59279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:09:15.108358   59279 config.go:182] Loaded profile config "cert-expiration-457920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:15.108469   59279 config.go:182] Loaded profile config "cert-options-534062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:09:15.108572   59279 config.go:182] Loaded profile config "kubernetes-upgrade-503350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0723 15:09:15.108693   59279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:09:15.146068   59279 out.go:177] * Using the kvm2 driver based on user configuration
	I0723 15:09:15.147471   59279 start.go:297] selected driver: kvm2
	I0723 15:09:15.147489   59279 start.go:901] validating driver "kvm2" against <nil>
	I0723 15:09:15.147504   59279 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:09:15.149598   59279 out.go:177] 
	W0723 15:09:15.150858   59279 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0723 15:09:15.151912   59279 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-562147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.61:8443
  name: cert-expiration-457920
contexts:
- context:
    cluster: cert-expiration-457920
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-457920
  name: cert-expiration-457920
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-457920
  user:
    client-certificate: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.crt
    client-key: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.key
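Note: current-context in this captured kubeconfig is empty and the only context defined is cert-expiration-457920, which matches the "context was not found for specified context: false-562147" errors throughout this block. Purely as an illustration (not something the test runs), the available contexts could be listed and one selected with:

kubectl config get-contexts
kubectl config use-context cert-expiration-457920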

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-562147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562147"

                                                
                                                
----------------------- debugLogs end: false-562147 [took: 2.874448719s] --------------------------------
helpers_test.go:175: Cleaning up "false-562147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-562147
--- PASS: TestNetworkPlugins/group/false (3.15s)
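The false-562147 profile is never actually started by this test group (it passes in just over 3 seconds), so every kubectl- and minikube-backed probe in the debugLogs block above reports a missing context or profile. An illustrative way to confirm which profiles exist at that point, outside the test itself, is:

out/minikube-linux-amd64 profile list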

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (113.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3915866773 start -p stopped-upgrade-193974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3915866773 start -p stopped-upgrade-193974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m4.640150255s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3915866773 -p stopped-upgrade-193974 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3915866773 -p stopped-upgrade-193974 stop: (2.149374717s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-193974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-193974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.803729536s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (87.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-543029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-543029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m27.112961859s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-193974
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (105.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-486436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0723 15:12:11.818575   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-486436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m45.783866752s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (105.78s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-543029 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [806aa06c-55ed-4855-a400-2cf44deea87b] Pending
helpers_test.go:344: "busybox" [806aa06c-55ed-4855-a400-2cf44deea87b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [806aa06c-55ed-4855-a400-2cf44deea87b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004565929s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-543029 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-543029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-543029 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-486436 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4bf17f9c-04a1-46a7-8164-5c69e8018ed8] Pending
helpers_test.go:344: "busybox" [4bf17f9c-04a1-46a7-8164-5c69e8018ed8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4bf17f9c-04a1-46a7-8164-5c69e8018ed8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003793774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-486436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-486436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-486436 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-911217 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-911217 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (55.102630319s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.10s)
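This profile is deliberately started with --apiserver-port=8444 instead of the default 8443. One way to confirm the port took effect, shown here only as an illustration and not part of the test, is to print the control-plane endpoint for the new context:

kubectl --context default-k8s-diff-port-911217 cluster-info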

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (653.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-543029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-543029 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m52.969924895s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-543029 -n no-preload-543029
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (653.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (593.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-486436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-486436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m52.856233514s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-486436 -n embed-certs-486436
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (593.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885] Pending
helpers_test.go:344: "busybox" [5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5caedc4b-4e14-4fd5-9ef8-10ec6d1c0885] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003705132s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-911217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-911217 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-000272 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-000272 --alsologtostderr -v=3: (3.540029765s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-000272 -n old-k8s-version-000272: exit status 7 (63.574511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-000272 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (419.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-911217 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0723 15:19:49.699533   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 15:22:11.818547   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 15:23:34.866052   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 15:24:49.699362   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-911217 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (6m59.127481374s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (419.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-459494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0723 15:39:49.700062   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
E0723 15:40:14.866515   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-459494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (44.534155703s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.53s)
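This start passes --network-plugin=cni together with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. An illustrative check, not part of the test, that the custom pod CIDR reached the node spec would be:

kubectl --context newest-cni-459494 get nodes -o jsonpath='{.items[0].spec.podCIDR}'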

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-459494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-459494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02129521s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-459494 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-459494 --alsologtostderr -v=3: (2.339381634s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-459494 -n newest-cni-459494
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-459494 -n newest-cni-459494: exit status 7 (64.275851ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-459494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-459494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-459494 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (37.574237997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-459494 -n newest-cni-459494
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (93.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.436196687s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-459494 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-459494 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-459494 -n newest-cni-459494
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-459494 -n newest-cni-459494: exit status 2 (247.995727ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-459494 -n newest-cni-459494
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-459494 -n newest-cni-459494: exit status 2 (248.535329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-459494 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-459494 -n newest-cni-459494
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-459494 -n newest-cni-459494
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (93.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.737867289s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (123.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0723 15:42:11.818532   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/functional-066448/client.crt: no such file or directory
E0723 15:42:32.998973   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.004229   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.015179   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.035519   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.075843   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.156430   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.317019   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:33.638011   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:34.278599   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:35.559677   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:42:38.120777   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m3.979917976s)
--- PASS: TestNetworkPlugins/group/calico/Start (123.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-x6djr" [f2e28bff-ac60-4a1d-90ec-622fd625e3a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:42:43.241481   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-x6djr" [f2e28bff-ac60-4a1d-90ec-622fd625e3a4] Running
E0723 15:42:53.482668   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005587953s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kbxbk" [30e261b6-7941-40f0-9604-2a68e061eaa4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004996755s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
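The controller check above waits for pods labelled app=kindnet in the kube-system namespace; the same state can be inspected by hand with:

kubectl --context kindnet-562147 -n kube-system get pods -l app=kindnet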

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4j8pv" [0ee5c304-1f75-4206-b2bb-c5f8cdb9feed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4j8pv" [0ee5c304-1f75-4206-b2bb-c5f8cdb9feed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005123491s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (86.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0723 15:43:13.963573   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m26.64137408s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (86.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (116.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m56.346269189s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (116.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nmd7m" [18857bbd-5347-4390-aa14-ddfbba1cd2a8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009102479s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-smxf6" [7c836127-682a-45ec-b681-0a0d417bb598] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-smxf6" [7c836127-682a-45ec-b681-0a0d417bb598] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003559698s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
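The three calico subtests above (DNS, Localhost, HairPin) all exec into the same netcat deployment created from testdata/netcat-deployment.yaml. A minimal standalone sketch of the same checks, assuming kubectl is on PATH and that deployment is still running; the helper and profile name here are illustrative, not the suite's own code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command inside the netcat deployment of the given kube context,
// mirroring the "kubectl --context ... exec deployment/netcat -- ..." calls above.
func run(kubeContext string, args ...string) {
	full := append([]string{"--context", kubeContext, "exec", "deployment/netcat", "--"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("err=%v\n%s\n", err, out)
}

func main() {
	kubeContext := "calico-562147" // any of the <plugin>-562147 profiles from this run

	// DNS: the pod can resolve the in-cluster API service name.
	run(kubeContext, "nslookup", "kubernetes.default")

	// Localhost: the pod can reach port 8080 on itself directly.
	run(kubeContext, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")

	// HairPin: the pod reaches its own port 8080 back through the "netcat"
	// Service name, i.e. hairpin traffic works under this CNI.
	run(kubeContext, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}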

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-911217 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-911217 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217: exit status 2 (324.054476ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217: exit status 2 (327.817586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-911217 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-911217 -n default-k8s-diff-port-911217
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)
E0723 15:45:16.844618   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
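The Pause subtest above follows a fixed sequence: pause the profile, confirm status reports APIServer=Paused and Kubelet=Stopped (the "exit status 2 (may be ok)" lines are expected while paused), then unpause and re-run the same status checks. A rough standalone sketch of that sequence, assuming the minikube binary at out/minikube-linux-amd64 and the profile name from this run; the helper below is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs the locally built binary used throughout this report.
func minikube(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "default-k8s-diff-port-911217"

	// Pause the whole cluster (apiserver containers paused, kubelet stopped).
	if _, err := minikube("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("pause:", err)
	}

	// While paused, status exits non-zero and reports Paused / Stopped.
	apiserver, _ := minikube("status", "--format={{.APIServer}}", "-p", profile)
	kubelet, _ := minikube("status", "--format={{.Kubelet}}", "-p", profile)
	fmt.Println("APIServer:", apiserver, "Kubelet:", kubelet)

	// Unpause; the same status checks should succeed again afterwards.
	if _, err := minikube("unpause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("unpause:", err)
	}
}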

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (95.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0723 15:43:54.923902   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/no-preload-543029/client.crt: no such file or directory
E0723 15:43:56.588496   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.593782   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.604026   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.624360   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.664932   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.746033   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:56.907149   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:57.227350   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:57.868446   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:43:59.148796   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m35.311188607s)
--- PASS: TestNetworkPlugins/group/flannel/Start (95.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (89.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0723 15:44:01.709353   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:44:06.829537   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
E0723 15:44:17.070649   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-562147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m29.668758512s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hxklb" [22de3141-e129-43f8-b391-90221ff7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:44:37.551581   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-hxklb" [22de3141-e129-43f8-b391-90221ff7b5ee] Running
E0723 15:44:49.699858   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/addons-566823/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005456956s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5bqz2" [ea7ebe16-2327-4fc8-81cb-68f6186ee97c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:45:18.512258   18503 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/old-k8s-version-000272/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-5bqz2" [ea7ebe16-2327-4fc8-81cb-68f6186ee97c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004149024s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-b7hk6" [d0154173-a2f5-4989-ba04-de941f290e8b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004441687s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cwpsb" [d3b1ecd5-c926-41ca-97da-f0b991154a69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cwpsb" [d3b1ecd5-c926-41ca-97da-f0b991154a69] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005085337s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-562147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-562147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9wn7s" [2aa08946-3056-4dc2-9c97-f501381d77c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9wn7s" [2aa08946-3056-4dc2-9c97-f501381d77c8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003861636s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-562147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-562147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    

Test skip (40/328)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.13
287 TestNetworkPlugins/group/kubenet 2.63
295 TestNetworkPlugins/group/cilium 3.8
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
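Skips like the one above are gated on the container runtime under test. A hedged sketch of that pattern (the names and the hard-coded runtime value are illustrative, not the suite's actual flag plumbing):

package integration

import (
	"strings"
	"testing"
)

// In the real suite the runtime comes from the --container-runtime test flag;
// it is hard-coded here purely for illustration.
var containerRuntime = "crio"

func TestDockerOnlyBehaviour(t *testing.T) {
	if !strings.Contains(containerRuntime, "docker") {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
	// docker-specific assertions would follow here
}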

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-518198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-518198
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-562147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.61:8443
  name: cert-expiration-457920
contexts:
- context:
    cluster: cert-expiration-457920
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-457920
  name: cert-expiration-457920
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-457920
  user:
    client-certificate: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.crt
    client-key: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-562147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562147"

                                                
                                                
----------------------- debugLogs end: kubenet-562147 [took: 2.483916606s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-562147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-562147
--- SKIP: TestNetworkPlugins/group/kubenet (2.63s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-562147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-562147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19319-11303/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.61:8443
  name: cert-expiration-457920
contexts:
- context:
    cluster: cert-expiration-457920
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-457920
  name: cert-expiration-457920
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-457920
  user:
    client-certificate: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.crt
    client-key: /home/jenkins/minikube-integration/19319-11303/.minikube/profiles/cert-expiration-457920/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-562147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-562147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562147"

                                                
                                                
----------------------- debugLogs end: cilium-562147 [took: 3.637504081s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-562147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-562147
--- SKIP: TestNetworkPlugins/group/cilium (3.80s)

                                                
                                    